var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
[binary archive contents omitted: the gzip-compressed kubelet.log payload is not recoverable as text]
cw2f𰘽]Q 벟kowɧ x4fjbߟLM0z7Rv/_ęalݳY]Lz7Df#`tU[rA?b/#chV#?~`2-g;7}Gُ&?`3%͇Chgl:3͏X+)4Rr%o޵>q#wZu-ɹI\qr䒔59|&M3q9h46ab'S ttێ{v7C di`'3_Up1L~]t fbEߛaKDe =E"aq̕KDxˮ-@@+LG9*rSi_dstAat߱$j) 3Z1iT7Ⱥb1 ^a*~4uI Dz}#uXbaNf-Li&$,Gv_:K]Ts-ti<)4}z6S]Y-2;աY"Z#SwfwL/טy7ף?פX?hy΁YgsnU|77fٮo:O=_]w-| "i8ZG(t6KjƆhNm&F۟C~1T6~C alӭC{/!J͈jDGʿŵZo0]+`J`q{ JE"0&6D2׉@t^Q͘uyzxL.Fly.v_\c K.w8_& )A-bX+ Wi }t׈-#k7| P~`1)N~FT[u7^99%\ʄnZvny C HTEb4 $E6ſRHgJc&s`Z՝T@1\{RU8vp.{INz"JlTn3Q/r@nF4S,_"})h<.10//ڪk[ .ANČ' ;ZE$*)n rWjۋkg /r=BW!labz8d{8;Y-rbhm{AzuFpҧ6ኄX8j ,DL /CBai{1wmcbOa[Qp6!FHei%$LĄEDzF'H(rrلqiNЀR'z ]E՘B~\X%O74ߤs-V8eB텣O΂ =ϳIyG' 2AC r4I>?cR[A]# h%ÖTo1PTuٚnCNVbZ/ $Hheӹ ;B40f`qFO'܁``Hc_,L_lMX{\AL9b]gTJqP7X1K6<"d/)ht&4PeB ?>,zz {΍n4u3_ ~,wA4iJ;"cP$Ku(,I0Ɍq4!Nǝjާ^MD\TYemT :Y;*w^ZåtS@&/9*^oTꈽ$F#Nqir$&Z0i$g^KCFZYSyX*TXr$6(5Lx};MC sl2.Q= }AeD b'mx?@$#q@S$H8:NSZVkb$"Ɩ($ HH"cͿ[SE7bXb)1HV ȳLGރ"Gx̵ca17W۫ȁjQ*TMI%PN[ֺ1NeH kEFZ=4ڏw/,_,Ⱋ%4;ilDALOt t yӴW/G~^DDpM_u~X`/`|Ox'}g`o!;G~0c p'<kz(gz}5߈%d7-Uח#5Y>G2"//\Mo!p7wLi ބLdfN#_{WolfWfnI EWUWXL|ȺBu%3 ``?b&V(k1J(lU^{,c|0+۲ Kݸ9YˮYX A5uZ' n);'v4$/SR&M31iDs%ŖIavزI3lѦDY)T4PQT8AgGJڤkq4FƸe 92LWHnTGe# 1T}?yӹ;M-qE f$Ă+eR!D3Wt oQuFYORsdqX$1;# AɘaVh&jN߻[+p[)υ {4fX2K(21s  u++Llf!.$5p@cj\|P?z4Qj{Ü K"-1Ɩi 3ʵQDMcIQ.>&֨rΗpg4!9ʼn# Ԃ ^ah"|X BI{HB' 0$Vz콶XIqDQ= &:(Iݳ6ޙjK%SW'J.}*Q{H}0h0e;T-N|̓׭2m~ڌ >IB:K8a3GVΑe_ f@YКbeÚF_1ͥz5fs|NJ\T<+s_Ћ?XC,/ėخʆq4_P%ϟ^Ǡ"`Z?@.SE3'6' 8$D3u* =|jGG3|h _@NzAO„_E[c!zΛ$š(yaf?X_dR;]Ӣ I3(bHqD*PRīTē*'6`C<`=tVN8HA?ٟ9 QtSCŵC*s) 7HX9fw˭*@(W[sB}hES! ќI%S /&r lw|lVPҘfF*ɴF;Z=B<\t\+:nc4]_V)`t?q 36Kn—EqLe\D^(s]J;Jt 2N*oPo"P+p^ (n^eSUْɕ]aԔjtYk-U>T  ı ښa&1g3G~tӺ`^n@)ŀMA~ժїaqZ}!t գ؋gq=KK!vwY8,9t_iu:]ZP|dnr)o2kx9[*}U6mq-Ѱ3dlyk#m}~{t_V ZC5 $%T&ișnF%9]ٲl >d3XM{vg_`PO6n}4fX8'=m p,9 OOh4) R&eRкresBgͦt=z hUp ⩏>ǣ|ytRL\X_#_ rv~R󟹮?ܤ[\Q F:qں΂/X$&K !o~Y&P)Z,Ղ^^lKo.3 ~LH!^P3s%ŐC.)PB(y*9! , SrBQ!V ,J\pP Q%Xʤ8 L$ &PƽZ Jq]i#9"h~5y8e!D!"i$ s! -C4d!NQShԕA7R-9,>/4H2ZXT/3^ծK mK7 _ XMeL6 QZ.Voέ7Rv?0E]FZ@o44wohQ.$t%rNk[>j78U5P\0*v0c_lAB1 @F$h3HQQ["Wı'%-0{b^'ίC jWd4L"<˓N:[@c;;M;ٍ36&ݤe-O kc[ߦD&yP8ydF|<ݔDSK\l?zuG7ҝڄ~O'\Y,Y-#!/\Dk%nd$`bPFtQź('s|!u@{2)PdI6Ig m \ 0rjc/K޲'})g/M;ﳸYըDiz iWo8˛uO*7'voX<1)LIL3i5u5ɭf&>/vv?j(~xk-+W*ͺY'*G[|Suo.o^>[M\<ճI=[o?t2 M'_z-_ us5ޟvos/rq~0Tc:!e[,dl2"˜hFy'-ZeѢ h-AV`l /uY%h? Tf]# ҥ\ɪym oUl9|~hIޓHjMMlՐ <")rk-R⹫NZv -R;uH1 ) f#>z U(NqMS_<3ހyZ5:*CKO.NŅAzZ єPD[434I$1a^MRM҃竩t4aU*R@E#,C(;  H豰UZy]g[e!zNp)W[Ay9cU>gq]ɝKFR]B‚Kc35k%yA{3 ד 'yzaC+2A >;ah5Q-1EN1+ ae*0dv%gK6]Ԫ >)P#G.mo~iHYyTˆzNߖT@Xj)-4 JȝrLqEߚpp,۟' 0EX;Ri(%@k-vr‰!-@ *t*B dDX)d\ZQ`C! 
3nJ7mݫ%n wa]|I+]uw }L]53nC Xc` 40" <5   U B*D-v?D-|~&Q2wݮNⰿ[gGn tJD$XOsuW9j7(SDke BIkCcE@ R>mWSl̻sSuGŒhʠ q?2i r&hKA @XքOTY !P{@n̙!r{8!*U-x37m['w:$4vDF\ V~0?zs MZx%  ( 4qGcȜd0nrJUVz!ckOv~MwZ:$ء𵇽6j^r>ِ!$^z3Jpb̓p0R" PDK±V ) &@Xa)C)mȸqJoXJ9' L ǩ-$taHR!A-FsD„&|BXF,KtL^LZٶ_ _]؅U#B`+$z%9WnIMaitRȤvJIPH4$(dHnQ P7A /lU ~m+φr3^T VÒ= `4SQOh*vQOpŶP.F܌Qq'rldvI1]DatT" ^LIn"v(oz<>%;%Ei1\鸻fNM-gY䮗\#՟p&u#Y%VCнP߭_fe;q}~Tuqͮ Dh[@bU\ˢ bfav{?n@[Gvhƣh8?+JqA Ă6sj~ZN x E,;cԥYrEN_ ^m%kK3I(UۑwIt&{0VM(B%Bk0(PJk-D[[F&G홇3:珂d.6w6{!5թ I0[Z-Z/* <|?;Gs_e~4}]~|&l@8 Th65u@_Jg.᭝d ߦs|M d_wM4[*]h;nyj-´F5sWN߻/:~n:t۱l!UuxvqSSIX{Y<ݓE Liir(V;ta*ZSYRnGk@b@oCڋ:jAp[TE'E ]X$ZHpF)ƾ&A:9va4}scfݤvC(±fKcMϜ'y|wn:;r ǣaf0NN zi.̹6?3pVW)} Bvڞr?軽 8=.BS!j4H R KY((iey?N]8aW&v]:{xAlG$+2: gG~;R$.9z KFWQvQnxaFdKHzn" ^j%DMmiU]cZesLmykŤ渴ƖY0 SHЦ*pR$pzTT2;;Ho&kFGh4 #d#iԽ>nv߆7^L G\2Brx?zc2QW~$&tc5#d':|T 5Ry(+OB=O2Y\fI[Z؄Cɦ0=, 㛤4Oj\]H WdZ4N< 'm̠ݯpǣKpͳ,;ewCV6`']:H!qFMm$3tyi=EϑSWk/"=HA"Rch4*pI\*ed:tT s].\2Ua f!r䒟🾈k-e&gY>Iƙ:AКDžWÇ$}YNg tx~s*`*H֡I 8{yPi岟N&%"s8f]eQf8/C Gq!y/(a`'@8f}$^2ekz^HRNaƬ'H2vd5cz`l#/!G DA8S Dd[ɍR`Kt+p;<>J%"T%?=.؏! s \#>8s+!T"H*&j lq/R:WUq30$W=QZ k9|*Y)8ѭ :K2`ͅZ35!a-DhtdäҀPs7fF]~>js!&N?M<gQ3 ]*ev+C4ܲ$.iF9L* ]kF+| 61 Xؾq'I#QDؾ$%Q)dSMlE}O)H㡙fh &z.lv.daOpƴfyPf5!59hX&da>v{Z$ު@}g \vlVKƁ %km+΅ ]6(9]SjNP%mGXIh_z~,F.2H5IB!TB^@Ąd:0S0pl(9&bv/e?1mpN5#nq3vK,|~>hMb@~#ыB8'M̓A&@Q;E&Wq0Na |Zk FNI?ËU HN8 8~rs;,'<6[("H E4 tsVsXp *%/L̩V9l5u&|& e!Ʋʏ6h?HSάHmjqw`/&Mi`lf_k6p$,IUq†EJn( Աܑ;>6h˅0B U}j6 8aNuM_jKB&-$4LIGDFpJO-1"iyG:]1N,{zz@(U\eXdoDbVyҗeV2F`(XA16EBDP ܱbe%\$ ] /?g=I@DLNknSNvc̥kj7 Kmd0U$ \%7 (϶Ha~}xzGTzRMFM],taMQ{:Z%9uUZyt./gjnl}6hm ȃX'WG!Ik  N 1py`|pu%+1-K,7Z|</6}Noz3~ޥ`3<tQ [BpBP;2=F@R$4 T h8~&dS@dl +R(6Xmsr%@A9,˟ $/"RJE&0uA}0oSY:XNoԣ0h(G03r5:P2#(B(Da2DVf9]kJ*n{Lۜn ¨b۶N'iBz{KвfRRYBtO:@@9X -E/ؗ]SNMگ0+~mi:oWQRǟB1yNe|,>X|,xbx?LhuGc(Un#{wy!\^aKw+hvʟjWx@CKMErY&lMO! P<}7@:U14" `V1V-t3TM 0l>N7l$A:jh>KRxM|Nc플 6+؁DO-F%|_}|`z6(+lD(qTTf5q4jtYt J5Jp@5!G@]Y퍩<$UmK y!CY^1IoaFׅ}Eٺic(VWFcp$!#i9 !` ɩ#.qh_S|-p #1f4h8܏ݹ4ڎL}d #9 K(#qa#M>F@[k njsh'QF.>z 8VTƁSrAI1Iy dn LtƽD& JQn)$l2A9STS0se{T{eͪ0rUN's 4e= uׇJINۏ]+HVLLـ"` hǡ¼ 9#/W@8AFFpn:(@;o G1n:N&9nS) V{C4xش܄8%r>2G! "EIԅR&!T.߻\Lg%5eMDrh Ha۸h#Bqv"T2q U7ۖ; G Voթ(܍z& , 4htJTbU}Qihj:Ej$X*o\-ӉݐH0_z9y2`ܪlΤsKk7޲qmT0|!3F.W}dcN$ջ$ FVkE v{O3+ `sG}>.E8"qdMf24R+g6/?FXZ/U{ +EUa>_LU|[j-iOx4_)DZk6>=$vUbK> E~B\G} ĈSvt8cܸ։U.h}݋]iaBӀY wA7j6c2]*zͷi?k7s_]PrȪ՚Ү" Efhۀ)-z]P~V4S]W3}&ܡ6D?ĿaFe$yM̲\@ScBM,~:ӲH0 5I㒊U7KYN6UY|or^пWO.^4w<~t^ԲѰN&TǹVV~ShS+:9Cz&gѳKQ 'T;Yf|E"lSf(ڀlBbO-a"?-Õ7I:ax½,W87m x6 pyyF[~9_/jvWd8c3@p"02(= oЎy0~l ,}.ϠrdhٛQ6xf6gSw~.`a'`Ҝ\0;WX_3 cIЇ*m7nƳCGv*L;+S4-<"vם?>"Pv_wQ-3¿{{=7%ϱZOLNdg|L{Z$H֣of轍uK} WzC7;D%rc<}1qշ~w{e?]^r%^)Τya/{!vUߠ#Cї$73#np<፽}DkĄk.;p#,|IP;\[wv BZwC[ ۚZ݁ vA*pYTG7:SGov1A`ЍPtNlgXy׆0R?ur^}Gr<]/5qnnD`yO".j=QqVBiѣ2i2 b9U5'~})rl˜<JՔk0ܱ%iwy3J#DaWb?]wwUAQ=H}m%a}a[XҿndyRob1n:";y_a|{*01.`N,1r "nX{teQ(V4$Р\AICw%I58\&ظ"}`b~܇l';Փbb[Rl#QM>IyAC&OَlbF6Zr&:ͳ7K\j,Ao-5"◕\D"wg=]YG. g52;Bʤ]]ߢUܡof͚6kۦf/o/'{mMʤ Sm2nt2)8K@ dPx*~L͟gr _.[/?]-_k YȽkkQ،9g@~ج Ҁt8酱 T Nl0^zM-3쇎¹_ ^#lSGbMzc3c"ޯ(םhPenh. 
o}x-%%ׅуKr4&Gt,Q4-у}N1\|#;ƍwEWmZsB]0h+3$fQ#)f\Pk؈"8Hvm7֤0̀?-3jwkٵjV+:;hO %9 ĔhkE?F5i/ #-1p6߱C/!0BC1+M~c?FjBS$,5E Ańcx*y!p?"-9h[FGC${)#,^9͜2)dhJ47yfpY'QAΪ *NDt|bĿ\{рğDUP9`9ee$A+ 5QEӡZ Q~}*e1hekoY4zEy@+;O% 7U}ZP=AW N8Mq{c 4e4Itq<X1QaQUJ4D}S"ixǩ,x[~zU6_rd 9pOjsLj'܈== c:,g_֭Ԁe(`qQ_7}r8@(t,g+֛ ݣü>&nIϠ]'*4&/82<%;UĀS<`B$!7!olGñij~}8Z}[F^;W&|Nɷ.lZgik6y߳nߕ Ɏw< d9y_N/]:x FLS6y;rոw4vtc44[?cfTOv%hlx-}J7-\cAd;M-/^r_4R+ެAE>0{(ޏ4+#Y">8J{ڳl'fO-lS4xRO~yZ68vk%cfb.S ѕ.DW4x薯>ĸh[*|Xh%)-CNZv*CUXHh(e=b-itZr@+|iC02g( QcэÀ0űH񒝏ܕn1,cZ h.H(|i /QE =v-_|mr}cDVoU8q b5&_~oR[hþ J$Pq q!Wkė*J30#uqS 1rM"E#ǝN[{|$3G+yNzAG\%##lKn-AŬ0݉lЖR։ W߂kEۧRB-}RbF &!WGgO2F!!IS@kAf41PɱB&Lje 3!JE-& <:3hծМS]"<7ɚ6iPk l ( RŬͪf0Jwߋx7'T/U*Wz<{r([ 1hf;HG/reuv4.coV{?h,NrWFrd{0]n]}Y+e UWdwgj|4zd1/7>6Wb~u_)<i.zW/.owwA;3]䇇x7(vy7M/ʔ5z!pU,o2;5_(k hQ\4 V˃թ}Gv@(.]TZjݺА'5:muӄܩEIw:w:.B5k{?w%ܩ y*ZS&l-᳖J47ZJuƽZ0 d21DAVP,)ɢAT~5To6fm̂W(XVYid,x5$Xi y*ZS 8E%S6~볪 -n]hW*ԂU:o#+(P<;=JoD AaPd~CKzcH^104ZoQHH`0/93&x`ԒuQb֌80A"IT> 17')SP]Ȝф w A{dŬkBx)٘5(kAљq;Z5-AŇ+k.-k@CVo"ՓlX7<%%S6ʴԷu&Ժu!O\Etʌ?ܰny[-JTm1w`èjBK[UJԡE[@Z&pYVH"k6R~J ɗz0<I@Eewb%DT◟^- *@% b#s{2ę1hn~6P}Κ́.ثo{6, /٠:_UЇ,x%XNҐY%xXEu7II ;^oF?/>{.ky~z*DpTHm`g)' tr*锓+pptH@dzyרANAQUZwlU8;*Maw١XJ ';΢xJð[i7AdZ(nC*'{v-١8䅳h+R=eF ؤ;:_4⓽8䅳yJ("/2g0?ތsu9^~?נ-8^ʇ_ӓWx^-^*'AZJȮ/A+tRu |_Bez}7o$W2sl iz f Q}<~%l]z`U~̖i"hв&J04E zV\s: Y<2"K !12(82#duHhSIVnRn|~(k'5%mN~d0q¿(9R2 PӾCſ 1ֈ6 4A<=nN=L?"%%=D`||^];08>Y*%l5c%s`DƏ| cKr$F^Qk%mn @,';qs=nx Jnln=# s%uާ%Mu ڇ5cH](f=衹]CRbwSa:wStZeM:]}K/ߚ4(5nTiHs!gaA,C;}D1xDK#u?m{8~TSq Ap$7eEqw8{ /v0_ݺ0<4mKeM<]]^JToLD?B]n*XO^~P_)cVm4w Yo[Utࢿ캤E»[5ᜯ4 \ fVZĚ~I f9i'##:^'O%1ZDY&*7˗w,-"Іw<}b1Z>XKuB>R4;uKy5/99" (HlhT" Grr}#K1/dט?jmKYŴwאԹ`_[IJpjr%Z' g;.,T 4xͱlNOIs.!w@ܥAǯOm \iJc7:!y2.8+AQ-HSq z%Q_≆Tлa)"f D'@,"CN#YSBfG/Kj[.-C"EY3J؎Gw*#ؽel^V[JEj|a8jBj״qr`T( t?s6n~,\IoR$EYK6l2iT2\L"8ʜt+MrʱSv ߖ64ڠ6J($#*Y,?ɠcu p6o~%э*F e5tYVCgM5XOGR. ~"5iMu_$Fo_j&&Vm_zh*Xjڙֿ-W?喓K! YY Q.FZMA?V2`KUy5%4Iz .?1:JG+GՑ{Q>syb6wz Sw6.ѰJ&JұڙJ P %~HE z<.jV<0_U B䙒p>/16|PAlͅ)πnAoQϔVF`u<9 m;' ߇8Y-wu$úzzcA7A%&\}̷!by&{Hh[s_ẉ%wi1zXR4>$w4E&=JT"Y #&J/QEHFHd'yE D_cm7B߯'/Bj=Q5*l@WanCF'ܗCgH-e$E8wU9-4Q,@RPB;A b'r(P=ś%52E(ʦm;Bߖ75Ws9>YAβrAWҀ+{BGc[ dܞ[gd:渊5}krk( 5RNhA3U3R *  ZQv6Rd)0" PElI8q}bnZς\AC,u(2B\fGQ޷ы)`eA#G B'mC+0>3r8%KwM>,cr>,cΚf?F</ē'Дyg$p`4U`-u^+D:_\E۾ CejFB$3TxxIAcI"XM8q҅U<:P[A^_B{;nmV@KpPvkl'ÕѶA]dn]/ۍ4_dx r鳘-Bu% D6ed*:Q* .`( ZeT(),TَGcIQF T+BJQȹr%NHjH@[FvhDyh({fYȅD"QH̓HαiL j1l@!gmCDP*Y_[+lQ[]S;D$/קiCPvUbW)⤸euf, 2$D\W#Dž;̮YrM8,檐J/1=BGW$~>%Ֆ(/ M 6&*x<&(ȊnCݸtEKZovBOӖ:70jW :B -%:1>Rt,2F.F0(툍!NolֱQ%GbEǯs# Tw#DynF^FFq# q5#h@C#H1 uukBeGBjbp-g CdP4z@oȼGH1PXmЈazY64Fs80А|"ZGKI<.Z Ȇ&htZPٕb7Sn p|6W#`N1j8e C{\Ṋ81k !\"dWT4D!N ;ԮYx +M!!ZZ+N6(0g tm)!#Cm8IĚNE1%F "C78"hE-h(gfĮ3rP($7?7ɞ@n\@ѷf DzUt\s9IWfj:/K %EjXFuk KRd/<>Je0hrw킮&?}{\F F{ DBN谨1(. 1}Bwh RXhBGjky  =@ab;`TJ;kMUoTX%vK=R3L!β3/P mdj*(Ez7AywmG^We~zMhн`dmv-kר{+ydm@w3eI%fKUfR{E$7_786QuիrasaVh O5OCIn|J,$˥'נ\CzY!05gInH{4Ony϶|vQs ʚF럏F&(n])Jq'sVhF?й[{5;ƒ=ggj{ɑ_%Zi78-v// cȲO=俧ؒ喬fwķaC7YCbUHq7w)=.6{X+,%\yV0fqa*%y~2גa v-y6;Mղ["}Q|$gu4Sn9E1@\7*ĝlEz }LSlɅ5S !'Z凼]mVHզ 7Hkv:/6wr܎e6C6?V=|XA?vӟY>@ij1 Zi|iNPhdg(F- a^#+ tap( r"Ш^`߶3%84:R`&(Ӟh ֪r_)BB1KCx^H[i&6pƍ;w0c:MD0 El3p)qC*QvT/=p8p6Z}|pǭNg&bQ-(YNGXTeK| >F㑏Z>"7(SLpiˇu=lKoﮯ1szHw30ώ[T+?ֆ}ēa.& T{JJkU 3!g!{\PD0讂qK * Ԏ=`6կ.x"|˫;saxe01wn}S9*}Yކ]P,2N;E F} +4TҎ/i\R!RԲ)(BH8ރU|?-!l_J¢zM^S1~;ww™[oO< -OgYwp!4go^]d,yA=q#cGuD_iT4(xE|k:S;[!*!V?;GqD ]=Oiwn.Sr%fs۽kڀ})s2%ǝ $З3J͈eΕNz=F8$ڒ$ӓB*]^A C9e ^P l1}1ʇbyNȞݱtp1K—WKBQ$mIFqtcDQ[|V\p|Q*偍= |6Q#H?6n0+w;R뮶0i2tRMqpɽKJ$E%2\0$ yFbm"@(3W^^o/a68;vkqp&ʛe2ZlΓ(@r.4E57l$.I1SF<c 3Lqe0 ͚+=z}PMk:M㰒pڱ<D֟>[_]\w$=xXUdSXŸ ).Y0-,GLT~!) 
04LtT$7!a4zso9 m%YpK~-s&q©7K#De/z]a6*VĢ=u`:IH5[*]|,YƢbBR1b5!c&Xy̽34b HKa<¡v*:]+ 1@g8 +2J=)VZWR1e//K0~9D(bdq30 p&/1Vm8nMzЄ.p*Q e cwn.Ↄݏ$wO(#DfI0CRc50\ s2 1li8+c%6ARHj$eF2 JG.IđS*9v^,>N#%⤴j*S]H? .)h)7.82k*2NE\n<lK0,fNWٽW9pN""\08"&4fsqLYw<#.4O'Z1L_~s PVKk74s% Tq?sBYLC6Q V"`"Ȓܲ%Dz,zTuL9'~ 7IZIZ& gf7"]&aM&FBcuyyFocʐcLsN'TFϊSJ;Zn `;n\KOג7wZA1Ԙ;ԝ4 L7՟ꮁf![rB!kt Uje=pfsBzpdj '8JHRoLk_c=6~d2{pWrp༞FO-Ղ0.K2qmK-#e! )7`in \50ouwp*(DaCtU-$0z%zK/޷˜󫥁}!)p r 1pVDuG$#HPE@͵vIwl85™X|7|QL^}H 'A>7ޭAS 9 ]X`Y+s;Ogf,Y_㫗& >r|}{.V35I︻ J1c>ˮ\.dJ OE"rW/KaY//wbHВt|:Վ!)EUqjt z I$`N)5+P{? 6]Ō Jqc>5{I(2ķs'k@G!vӘNcK- M X@OPcaX,L #cKʨ'ld:L fL鰎Zq5e~_ ;vQzKkMs+[OHuC!gѬ,͗ݼf[40eWέ{SS J3ϟ2sw4w)+Ȟ#e֋mT=w:2تffvʥOO`Y7p닇boOb2nsUFק 3rrZӀ)\Ce.?raUa2_2̢?S i h܃\4Irz~-A܄3bbbTY|aN&+ir9q'0瓺HOlI=S(+IA#ktc')å fZY<3R#R[y;˄7,p>}۞x]}}x[?PM¹ICX^5CCw>=iڟ2x_2 ʶ?>iNigB7sm7 bPt o\/yVyB{elsz%  xF()o/“ërLpNta>edLĦ³ǦZ=a\Nqj[qNdMS詺'Q=%O FᏢu(8 wY :c)̽uh8 NJ "[aEdv!>_ ^hYbN> [צ#g~N88 c$ӎ P@8flXVtE7PWND_FpY-EV`z8hމ`uIP?!.|0HppΩ^ 9zh3xͳ_NП}# n~{.kQf"t+zC"顷W ={Eϙa{CM zǽ>Hrs5U;A)ؓEMӍkf f7=H6bO^7wšN9ړ Tdj_AV͞l) Z v<5E xFC@/t=ڏ}g5C1O۩'K{`cǕ"c7lO3=`1e{?^}nq"f.`Wz,xpZ_@_5~v_v /!Q[bl-ޢ/*j1\7VCW4l\m)>kdbOi eᡯ; [I~^^e?ܬ--eWZ0m|7O$r22L{Z 靻toZy87w(z$Vb_Yz.Wjqޔ|eo4M^~y+V筈ϻ"@X̯PG@&ZMR:8sNr<mAd5qv6nW*/͌ɢf7z>{rL1Edgi}7RY9[M%}; !v|ٶvepj/nqJoR>2!^:(5!-|C?e觓Je4|lS>Q:-<}4oߘ?;ژpX?<hŒ`޲w2c>QJB0њH UR8Js Y@;oAk1{cX5QN墄 8G1Zg :$"ƬT4}.-wBIq*IɇwcA1;)1 Me`y%M=%iU Ӓ3 !y9&?jCNKe94Dz.?c`I8p9'fLI9ָ gHy 8)amdFbAhEYeTS$b셎*0H[R~uPhhX.ЅUmj)Z^Ib6uK wılR #}5j7vEbe ]@^g%(N|r)N]*JG~N4­e|pw=XK~˴x߮DU>=7™.?~_HZX[cī>vs^jqHD0'(w%g -NL svްYm΅[>߹C/7vP3ٻigmˀĺݏyc>X.doW@>j ؎.]}Rj/-^d\7$q=_[>~Eo!;ƯjwJ*cxaJa5d:GMWSzmx3J{0V~HwBC\Etߍy$ObKu ﱕwm)лА#Wf֎"&ٮ̬IS3k56%HͦYZ`ͬڠ3ktͬڰ4̚D05V3kZ"^fMq5V3k,z]fh ˷j_6EϮ^Kex˅99[p}9andW))Ϣx m t}l "}WonUn.hU8YƟrX1g߈$duUg6cf=k){FW.1 d w&#]bAKRj! (5*_O %!%, !`{f$'gL QgKsIQhs))keg.,FZ 4{"{pM'^qQq@Iޣ4<䜏Q=\kt)j!f8 K0.*M>{Bτ94=kI"-wҽIowޫx&1`eqAhhqNcDn(y 9A Ef^զĸv^=_xӒ@.DQ\f?—KU2iٻmeU8ǒ~^߹ܤΝ:,)ɔb'WR5D"!rw]`/LIɾRJRjUB398&7R8\I )V IC;X$Dژ+LMB!PZE4d&)Y-&Cf#C[F D](5 c" bYic1+%y"12Ac(Q~"Z %Vb-5)KJ-IYVD5zs9v-I~JyD@pJ|DR:N).N(#;ε\&R-֎@Of#6Zq> wR\R")JoI<$X+A7ϓzTcBywe.Sh]\:݆k fq&,m[5*P/g4(nE=C/A_X̌7`hKH#^9A kbbi>.zڡ]sz\ ?i.\r"$+vul% RF5^ ތFǩ" / s(RDC4դ2#a^<"L_. Uql>z;K):$e5dEkX ΡoczY:\ʿ!xǏkZAѓn?FD.1٘ȃ.yG_7mf\y^VB\ЙY=}h|o&ˋ›2+hNdh4[n^kOq򝯆ޖYD5x)zDOwW_݉^Ɨiǭ|/ON31lmut"w뭣x?_Bt&"*!lozv[Gq[$t/)C$>F^Ӈ`_̩Bf Z.0ۡ+bRWfg9 Ee)s!H%=Hˑ;V8$# ~8eEOq畷˚ C>yČ{y BM.16!(z)zq \ւv~Jl t=:h3$g{Q9r&(rTuv6DPBٕSCw)O ǾgFyj}ߣ+:U{v*a+";y8A ń&zP%Gǘyu@|ʔ t_|UG_!=9>ou#=7mܝ%-i5CE|A^.= 1+ &V̥消SuݲTC w8nB=.G?>ri=z1pqAp߿ O/4ant4N{OGb?i>E]Gނ.^=tTNv0s,'&4tgMŅp=044|~.p+t߻&7GEPݝH8.One,".7{q}8졡Y\_xsq{キ<<N)k9˂G_OGOb(ǾBjolgנMϠaI{mpbKMIH#Ty1?.>6COPvȹ`luGO^^d{pޕ8{5q ȱFꯃ T`2S3Ta ]Ndh8|V}:z2->?+~OЭ qNOe*=ȕlzwO7hsBκﷁpaUsF_GV: +83e,6)ƷFXt{aߖ>\%#%UJ10i s ZðؖpЃ5Et~-wV%΁Rq-e#(,L[jDb= *>xkuHCߣlX)n1o*iі\gՠWqM܈ jk1UI;PƮB\+=װ֫Mjϕ^LBR17W`^zEx+*^GLK-v*XߜVY`xDWUwNb5#kČk%_ 9j^Z.b2Jt8-Z5 qgݻ\95TOxa*<@Hp#fb>8id %W@[y5H5։wwz$%3[bT=0ކ4V2Ȣ }5IېFqs*VW=m[ d)Ta$MŻ[BGN<2S, ,KWbr,-rV~,w ^[+UI{^ V#FV_,$;؋~^INp#)r[{kݪ5ݑZ{,LRm6k '\z~ܚ|dkWqƈq k ;'GC;@5az'W_d:-1LӭDŽ.YԀd9G2 GrxCLʳiM?@?^6Dб{\7dNj(:>8*>9vZ:yפI)N"xٸ!Q K'wpWxFM0*.LORq3 LE2G$ˉ"SKn^Nq`ZhKmURxc~w2I>{6~ȄHą<\d \ a#uPƇ_MzJKvN& 4f3 - c_}ȔqbZR5f;ceH H5ANJ@% \[_Q7 <¢!퀡gc PͷLf$*?PkCR;-Ygh2ƃt0!KwG3R9F*IJlC BlN{| *W+G!Zo#UBu SN! 
11CŨY`\zyMaJFTHL 9I $')$%H%ARXlh8K aqi1cʠbF6P1*]\!iG*$7tjVz2Z«o SM O 1ݾ 㗇I;d*ְ',HwBymvԇ&:#DJ4 G_Xv%afM^XHҴ7itSpۢcbJI%]pIJL$[[}GxD@iq)r2jR(%YSwt.Z1B91gcEAˣdG((< JRPL:lQ+:AЪʽ*2U)1\ kι;ըuD+_ ?&nBhg~D2X-dynpJ"]Y͖2֩J((x1FYy$JƱ>.}2nWx5%L F L/ͨw,Ao} WQN2J")Q7, $^uފV%W{'[σ壕ĜLA{ڇrEqhwȀd _q:9g$ן,6-'r1hE*! Cs!"5;$tu$VP! >)3?Zr`f}\e,*-tMk3 RNFVu Lm%ej^h.!3{ь͵z((B! /O7YU|wP փyl$*켃՛'䀗Vlg]&㓔t|OICE$ZP@_85Hr1I%iitK,Bx>*"**jQCEM;SBVwMJ}~*i |[L_Wd6,Cdg\ y.W9Q:p)IM+G aCB\zgrH{+9cdT ojeB$o.̳J*,LEAS쬄3*U%)Ñ&(! B0وL6rtumt ԟzu'gjJ9׾`KĊtZ)#"H1%4Kʑy|N }xக 7-1o8"bamd^'":  `OD; -9Q&3u[{ܵ^e0D+BtZ1i{r0bK1'AˇAB9FgML^?{^tqN;hӂ i5_޹7l[4q8NmcMGg&Ӯ; N FGzUߝ^ H9(VF-.޾zsz ,~_tf9Yt˞ |*PzR;[(ϝhzpL:3N=pdNmF_H{M ;랞Mީt|>7 3>kM;gfT.O_z9PW-o O/8Uwp(fRv>x:y|9$*>pp8O食4g_>:{1qWB~GyG( BIyT@683?H˟}=|;̴7fݻn'9IK ` $W'Y>N^ro5gɒ+^x{9ebWPP{nS((e" =Jã #g=FN.g@RcT-4DGYdЁ!)\t}{طaғkTL,:@Y,4N Lw[(QAnWF~| !_'ގ#] m3MYera/mLz$ H:bƵREA|${ F|d,btL5ZDtY*G M|2')w>L&7:_Vח?/YYΘ^_B* Cy9דglZ eg]'zxx2$f0EmJHSzV߅{cA ٟf4GLc$g7[m4(:h g=UZKlXPn5'ۂ)Id|]knj+jOxiIE%;Ohlrw'Zbtb9+`"Ui^>yTvbuR/ 0 dWK1pkCzhq.05} L>qQpu񊮱wh^1n`~gW[Ek?|/;7.Q*`j]gCqo iΔW !ܣp9)1L+cDB1p*?wi\͔֦E5Qb& JQ! Qa|Iè 1@DxK'nZ!k^"ivr\| OjVw)BGob5wLyc#Q`*q,1]H9tJ5 w"hEkAb$ |6tP|tDi!D鬧J1)hc0yG:}H91xΤT{ZE\s7w]d'!|ކR_X1I30+&E)]@c~[1f xd XZ\j(&Lbnk`W]&,U7itk͚Y-L]`wk!Rq9g0;#QS\O SUg)b(P%f,RpEM_G lv8xRٻm$nhTžަ/{"A֭d)z$Y_~ gF$8.[!k4-BѥqBjmiBHۺ`U-"bҞi ݷ҄I,# C -6b{ic Q)f7IRr-Cq$S1%(EKLKVdv`"m߈&]0X4} Ӥ/&R0WtN|SnEoPjkB'CM-:ĥO 4̥O6HAQ;$=z1^,h/t }^zNpK-iL}gSP+gPiƎK w~|E=+vܧ }>( $Cɢ/AQc2˲cFiQU)KfBP;B* Ajrxt>b ~5A" ;%@!kB(iS6(kG@c%`BKA,p0p  7BRr|`$D)X}"-28-rzh&Yտ|%NkPEw'M"regc[%-?`_eN|%)T̩iYd T)WYʃjw}2Lp.ӶӁ}}vmHTKOvY"kGCMKn5ZkFnϼhlD#(@!gVvI! f~q9B!O;?r8qWe%^J֕M%UeiyȈbFu^3Rѽr;nE{1?v7hʘ<{!2lp*+&*SX "}ˀ!ZhxIvMjT e4/eCBARAQ֬%0DryF.+>B|ߖ BA^,k]J)v|mP h '%K,5 )J&,B/e,EH哈,E0B2uFF<)cq-Akny! 
] PaZf8xu SY.}>Kp}7`$ *GODD6 TBC+BV.d}O|!]VLY-XH:2ȰQQJ7I(ym[~ ㅎE&QT<0seCg_\لZФ!M舷(declص0f"01 ,0 ZpuU [/mbпlpdˁpa c.ue%L>ѽBr3~ʆ``rJlD)ƕT o Z7tډsP'umJ͠FWUւj #".04NHleQZ߮ 3x:֤2'(ٟb>%Z:i率Ȓ B,UYY1tj%"* B ZcPqFn1EY=cY50t;xr0ۙۗ% ݗ Wh#G>*.0eäBPLYu`lxBkpGI],Ĥcd44om p:+J%v!Y.2'F{kuńuC;8`, īV49_Jg6kmT:;aJ t* :g(8) RBn_^!20z\cqVzW+F6qMcLUy|PˏQRb\[z:BE2=_V6H?W^Gl06_Od8ZGs ׁqxt$ C~Ka@ɴPܦ^Kǚ)BMY1ﷲw4GGcCx+[4!{)4c@0>ŬJ>8ܷJH"}0H ^XRCKZ˴r*KK%\{ ~.\[Fdh(k6P Ab΍FӹRW+j؝2Oi ;Uç٘Wae'K'M!i[5UPŪ,_92$oxVjF$E u9\' WUôcWN_MChAhD03sK~ @(5syQR)CP/Gй)}ClIezq8@| >BܭeJ_yYhXJOgf^ [JY㚺(k+S^oú,%r Et-܁jx|vȓida+i>3՞5B7ŸDJpX‹J3k%UZyrV2ز%3KD.$)͋`b4ڊ)C<^C4LiI<RYP(Wj﴿y;q .IrzNa Q ɒ:𮙸UQX0Iocgq,ű߱(.W; m9:!{鯮oߟѹ9KOL>m+-@LGO]Wu@]pG77={`KU{w!,*Dl´:oVyL֪w^A-O҆1`oeS<\ ʕerTXY ,|N{@2hﶾv?Ày74=,T$`\#{Mĵz Gɇ[ֿred02|nt%1a*dp3۔ZFa%:xrhW1ݑ`Y,6=hxxZ.7CHnY-?m.K_rǕ+sUPTpbjFF]j`cC $%*G*x,Is$% jYsMd]qU7Da- Ol4l$Ic{(Mn+Pj-L-_jDz@ ,kx`.Uc}Y6Z3l.}Ȓ9$a# ;Ɍ# L[fG0'#16=%ftH[p@T\FwPOJe`uȀQq?kiI1П0Fѯȟ;Xv;izRk<|'yg$*R{.n`C(04^uCA䡕˂k&#/qw2 h #kW.?科b|-m4^8.ͻnޝeoYw/ͻKyҼ{i޽}7HDLY)KВ44!࠻պG{dj?NY<%|DÜuHsx!]9꜄%&ǖ7ץA+GIa< >n܂%L%a(DMIqWn\vkg<;GX~9YVZ 7.E,%r%+bC.o:cEct+emE:8vWL(vlTg[ˋ+|̣//?'̗lߟnz?_x["lk4:]xz;(~G7ᇏ-Oo ,h>:ZCP;(nSsePDd\ꙭs؟+R( S1 vM@âO[kЏ̥e͜/Qd>wwIHdT;pq}1䶁u )IXmtɌܢ9e̡ݢ)CGJMcVVI߸JjK0_-x@dggZk#_> sق&q9xFKJkƬVA۸,:qDi}rvB߲yBd{!?|\+ wA߅DWNHUiqj؏PT[# Ї(I Ba;߆vCq-1;Fv;5Cvn"E4G[=!IWA FtrHn;ELL׺nCHўLEVdC 1ȅsLijk몲6Vl75)56 6E$ӗ+S~9H;}/Kxei67CT/uekP Bԍ[d"WvUGmA N 6lF,bmd=s Kp{QBLL7(|xS9=>\EK u0'VE*RTHs+RZ+%xDZaAX4PVEi, _*fo !/\PENVPCMVIyW֘ڡ))6#U؆*h5]I݌^R2Bkav %ė:lHi[nXD %Ė=DhA!$y&leH[0TRV6Zny'}EƝ@wbM[R62$ +LYRvxPE9]IX "@Gfˌ0r<6 l؂n) ~TueF&hªNOڧ?qL{ xS5~4 hp2<*OB`9k*%̑8rT̐R NKƸZq;m|ÓղHAK+jY쯴kYEVj.œniYiJc4tfHNCitX ڂi-" m>s.h mICˡ3c^wwhd6Q&<4{PeC IG{ւ`rRRc1_@o>`vִq+Ƭc20QVc$L Mc-|$䍋hL}u;v g9h\ĈN)֗8aL~kvBB޸fT'(nvSZvA䎑yPFɴ[~zQvBB޸d*2_?{D_aܔ]G2ssGU }OuI8PI6H`w<ԩZ* <2rF!LyWk')(_?;wg?Y$YF'WΪtINki1b>Y]7b=U] ɉVF z\agiSXՇr,Ѹ@n.dAkhKh,YvFnʻI=:Z!` w+ʾDZr[ .CZGq<0^|ud_b8P.HA XJkmv(X[<)#@` JVʴR2 ?Y\p!5Sڞ]Ϧ;!AѺ߲l C1~[0/EbeKQ7LcPkc6J^@>;698ttW?5_ >僨<@{֕aB\CBڹӋ:[tM*}/ 8_ ;8 .eBq{ $妞Q pZ#y ʓ1yMT8K;IIiXutKK)XM08կ-]ۮ7/W*iCFE0oQb-(ߦ|Tr Af4+NE2NCy/$y_9z1??$`* c07"(R`VԡquYK{yq1suKHbQbK[ц"1yjt2ڐ{W0>ekJ/!)ٕ ޜX*2|K 'S2zƒΆ\}&cԛ|Oc|< 1 l${LkJ@ 1\[UJոL' N]_tYr73m57caߚ2\ﱳb&C' Ȭbq=r|h'[p Li@҂OȒ wBFFIFPxgQP~S(}zܭx&˘UMH40! ֢ >P`pSZ0.nS7{xE t{$xkصS^s Hy93yTyU_d!!(` +/͂1M̀N;)cZxbB{)gG. Gr]L-:MQALKv2DJl-3sߚ%ԝaU(F*=@͍FRkyw8oj*}yǃesqVK5f]$jwXE&rDW$z²(~Ƚ; QrQBzBu//ӏ6faP!heiҎPJVL&oף.7=0Ed0 "X-N Zn RS0vD*e9 Vj\TH J(h6bCDoYQ&UX =&&_ /@L#)\-'2'hf\l 9L>dՈRpؘ䗫>䌰?!"IDXV?ٺ"[Wd+<)CS S+eZ)JV42)rtSDwNvJ~ G(& +A"Q+/$=>`CZڍYs'9BpWr8a t..(ᯣd9^|,ً* Q̣UwZjÊE^Y~/$eiUA\iÕ,-iu䎱rd|@{EPt.^A.\~:~:zW)|}+f>iL烳sѻ0u?Yt./ s_?u>=?>W٘9A9MB:atg0Ӌ'C;8K`T^L*L֑}.V! 
Jan 30 00:10:36 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 00:10:38 crc kubenswrapper[5117]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:38 crc kubenswrapper[5117]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 30 00:10:38 crc kubenswrapper[5117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:38 crc kubenswrapper[5117]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:38 crc kubenswrapper[5117]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 00:10:38 crc kubenswrapper[5117]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
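The deprecation warnings above all point at the kubelet's --config file, which this node sets to /etc/kubernetes/kubelet.conf (see the FLAG dump later in this log). As a minimal sketch, assuming the kubelet.config.k8s.io/v1beta1 KubeletConfiguration API, the flagged options would move into that file roughly as follows; the endpoint matches the --container-runtime-endpoint value logged below, while the plugin path, taint, and reservations are purely illustrative, not read from this cluster:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (socket path from the FLAG dump below)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir (path is illustrative)
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints (taint is illustrative)
    registerWithTaints:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
    # replaces --system-reserved (reservations are illustrative)
    systemReserved:
      cpu: 500m
      memory: 1Gi

The other two deprecated flags have no direct counterpart here: per the messages above, --minimum-container-ttl-duration is superseded by the eviction settings and --pod-infra-container-image by sandbox image information from the CRI.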
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.294010 5117 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350625 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350681 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350717 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350730 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350743 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350750 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350759 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350786 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350795 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350803 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350812 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350819 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350827 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350835 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350843 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350853 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350861 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350870 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350879 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350887 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350896 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350906 5117 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350913 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350921 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350929 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350937 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350945 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350953 5117 feature_gate.go:328] unrecognized feature gate: Example2
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350961 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350969 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350977 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350985 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.350993 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351000 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351009 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351017 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351025 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351032 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351039 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351047 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351059 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351069 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351080 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351089 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351097 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351105 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351113 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351120 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351128 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351137 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351145 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351154 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351162 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351169 5117 feature_gate.go:328] unrecognized feature gate: Example
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351176 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351183 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351191 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351198 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351205 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351212 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351219 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351226 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351233 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351240 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351247 5117 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351254 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351261 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351269 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351276 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351283 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351291 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351298 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351306 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351315 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351323 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351331 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351338 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351346 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351356 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351364 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351371 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351379 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351387 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351397 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351406 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.351413 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352327 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352354 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352363 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352370 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352378 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352385 5117 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352393 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352400 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352408 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352415 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352422 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352428 5117 feature_gate.go:328] unrecognized feature gate: Example2
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352436 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352443 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352450 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352458 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352465 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352472 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352480 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352488 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352496 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352503 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352511 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352519 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352525 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352533 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352539 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352547 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352555 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352562 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352569 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352576 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352583 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352590 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352597 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352604 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352611 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352618 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352625 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352632 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352639 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352647 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352656 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352665 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352673 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352680 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352724 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352735 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352745 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352753 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352760 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352767 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352776 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352785 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352792 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352801 5117 feature_gate.go:328] unrecognized feature gate: Example
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352808 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352815 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352824 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352831 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352842 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352850 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352858 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352866 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352874 5117 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352881 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352887 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352894 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352901 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352908 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352916 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352924 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352930 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352938 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352945 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352952 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352959 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352966 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352973 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352980 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352987 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.352994 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.353003 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.353011 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.353017 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.353026 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373293 5117 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373360 5117 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373379 5117 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373406 5117 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373420 5117 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373430 5117 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373442 5117 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373454 5117 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373463 5117 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373471 5117 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373481 5117 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373490 5117 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373499 5117 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373508 5117 flags.go:64] FLAG: --cgroup-root=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373517 5117 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373527 5117 flags.go:64] FLAG: --client-ca-file=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373536 5117 flags.go:64] FLAG: --cloud-config=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373545 5117 flags.go:64] FLAG: --cloud-provider=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373553 5117 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373572 5117 flags.go:64] FLAG: --cluster-domain=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373580 5117 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373589 5117 flags.go:64] FLAG: --config-dir=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373597 5117 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373606 5117 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373616 5117 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373625 5117 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373634 5117 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373642 5117 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373651 5117 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373659 5117 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373667 5117 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373676 5117 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373719 5117 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373733 5117 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373742 5117 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373750 5117 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373760 5117 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373771 5117 flags.go:64] FLAG: --enable-server="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373782 5117 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373796 5117 flags.go:64] FLAG: --event-burst="100"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373807 5117 flags.go:64] FLAG: --event-qps="50"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373818 5117 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373829 5117 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373837 5117 flags.go:64] FLAG: --eviction-hard=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373848 5117 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373856 5117 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373865 5117 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373873 5117 flags.go:64] FLAG: --eviction-soft=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373881 5117 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373896 5117 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373905 5117 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373913 5117 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373921 5117 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373929 5117 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373937 5117 flags.go:64] FLAG: --feature-gates=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373948 5117 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373956 5117 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373965 5117 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373973 5117 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373981 5117 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373989 5117 flags.go:64] FLAG: --help="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.373998 5117 flags.go:64] FLAG: --hostname-override=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374006 5117 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374014 5117 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374022 5117 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374032 5117 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374042 5117 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374051 5117 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374058 5117 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374066 5117 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374074 5117 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374083 5117 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374091 5117 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374099 5117 flags.go:64] FLAG: --kube-reserved=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374108 5117 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374115 5117 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374124 5117 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374132 5117 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374140 5117 flags.go:64] FLAG: --lock-file=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374148 5117 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374156 5117 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374169 5117 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374184 5117 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374192 5117 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374200 5117 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374208 5117 flags.go:64] FLAG: --logging-format="text"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374216 5117 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374225 5117 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374233 5117 flags.go:64] FLAG: --manifest-url=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374241 5117 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130
00:10:38.374254 5117 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374262 5117 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374272 5117 flags.go:64] FLAG: --max-pods="110" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374281 5117 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374289 5117 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374297 5117 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374305 5117 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374313 5117 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374322 5117 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374331 5117 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374355 5117 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374364 5117 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374372 5117 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374380 5117 flags.go:64] FLAG: --pod-cidr="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374389 5117 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374405 5117 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374415 5117 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374424 5117 flags.go:64] FLAG: --pods-per-core="0" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374432 5117 flags.go:64] FLAG: --port="10250" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374441 5117 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374449 5117 flags.go:64] FLAG: --provider-id="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374457 5117 flags.go:64] FLAG: --qos-reserved="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374465 5117 flags.go:64] FLAG: --read-only-port="10255" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374473 5117 flags.go:64] FLAG: --register-node="true" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374485 5117 flags.go:64] FLAG: --register-schedulable="true" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374493 5117 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374508 5117 flags.go:64] FLAG: --registry-burst="10" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374517 5117 flags.go:64] FLAG: --registry-qps="5" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374526 5117 flags.go:64] FLAG: --reserved-cpus="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374536 5117 
flags.go:64] FLAG: --reserved-memory="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374546 5117 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374554 5117 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374562 5117 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374570 5117 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374578 5117 flags.go:64] FLAG: --runonce="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374586 5117 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374594 5117 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374602 5117 flags.go:64] FLAG: --seccomp-default="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374610 5117 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374618 5117 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374626 5117 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374637 5117 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374646 5117 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374654 5117 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374663 5117 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374671 5117 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374679 5117 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374713 5117 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374722 5117 flags.go:64] FLAG: --system-cgroups="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374730 5117 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374744 5117 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374751 5117 flags.go:64] FLAG: --tls-cert-file="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374760 5117 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374774 5117 flags.go:64] FLAG: --tls-min-version="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374784 5117 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374795 5117 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374807 5117 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374817 5117 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374827 5117 flags.go:64] FLAG: --v="2" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374839 5117 flags.go:64] 
FLAG: --version="false" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374850 5117 flags.go:64] FLAG: --vmodule="" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374860 5117 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.374869 5117 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375084 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375094 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375101 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375111 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375123 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375132 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375141 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375149 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375158 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375167 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375176 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375187 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375196 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375205 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375214 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375223 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375232 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375241 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375249 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375256 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375265 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375274 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375283 5117 feature_gate.go:328] 
unrecognized feature gate: UpgradeStatus Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375293 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375303 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375312 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375322 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375331 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375340 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375350 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375359 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375366 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375373 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375380 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375388 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375395 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375402 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375409 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375416 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375423 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375431 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375438 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375448 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
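The W-level wall above repeats twice more below, once per feature_gate parsing pass, and reduces to a fixed set of OpenShift-only gate names that this kubelet's vendored feature_gate package does not know. A minimal Go triage sketch, assuming the capture has been saved to a local file named kubelet.log (a hypothetical path): it tallies each distinct unrecognized gate so several hundred warnings collapse to one line per name.

// count_unrecognized_gates.go: a small triage helper, not part of kubelet.
// It tallies the distinct gate names behind the repeated
// "unrecognized feature gate: <Name>" warnings in a capture like this one.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	f, err := os.Open("kubelet.log") // hypothetical input path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some captured lines are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%4d  %s\n", counts[n], n)
	}
}

For this capture each name should show a count of about three, one hit per parse pass.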
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375458 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375467 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375475 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375482 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375490 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375497 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375504 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375512 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375519 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375526 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375533 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375541 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375549 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375557 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375564 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375571 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375578 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375585 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375593 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375600 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375607 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375614 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375622 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375629 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375639 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375648 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375655 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375663 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375671 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375678 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375686 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375725 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375733 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375741 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375751 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375790 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375877 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375889 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375899 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375908 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375938 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375947 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.375976 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.383929 5117 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.405879 5117 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.405941 5117 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406051 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 
00:10:38.406066 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406075 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406083 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406092 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406099 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406107 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406114 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406122 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406129 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406136 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406144 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406152 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406159 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406166 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406174 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406181 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406189 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406197 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406204 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406211 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406218 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406228 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
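Note the interplay with the flag dump above: --feature-gates="" is empty and --config points at /etc/kubernetes/kubelet.conf, so the gate overrides summarized by the feature_gate.go:384 lines come from the config file's featureGates stanza rather than the command line. A simplified sketch of reading such a stanza follows; the struct is a hypothetical subset of the real KubeletConfiguration type, and JSON is used only to keep the example inside Go's standard library (the real file is normally YAML).

// A sketch of how a featureGates map can be deserialized from a
// kubelet-style config file. miniKubeletConfig is a hypothetical subset,
// not the real k8s.io/kubelet/config/v1beta1 KubeletConfiguration.
package main

import (
	"encoding/json"
	"fmt"
)

type miniKubeletConfig struct {
	FeatureGates map[string]bool `json:"featureGates"`
}

func main() {
	// Sample values mirror gates resolved in the summary lines of this log.
	raw := []byte(`{"featureGates": {"KMSv1": true, "UserNamespacesSupport": true, "NodeSwap": false}}`)
	var cfg miniKubeletConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	for name, enabled := range cfg.FeatureGates {
		fmt.Printf("%s=%t\n", name, enabled)
	}
}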
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406242 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406251 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406260 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406268 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406276 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406285 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406294 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406301 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406309 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406316 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406323 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406331 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406338 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406345 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406352 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406359 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406367 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406375 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406382 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406390 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406398 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406406 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406413 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406420 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406428 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406435 5117 feature_gate.go:328] 
unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406442 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406450 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406457 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406465 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406473 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406479 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406488 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406495 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406502 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406509 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406517 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406524 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406532 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406541 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406551 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406558 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406566 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406573 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406580 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406587 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406595 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406603 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406610 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406617 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406624 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406631 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406638 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406646 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406654 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406662 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406671 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406678 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406709 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406717 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406724 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406733 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.406743 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.406760 5117 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false 
ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407000 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407017 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407028 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407037 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407047 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407056 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407063 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407070 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407079 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407089 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407101 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407111 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407122 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407132 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407142 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407151 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407160 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407169 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407178 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407188 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407195 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407203 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407215 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407225 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407233 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407243 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407253 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407263 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407273 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407286 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407296 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407306 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407315 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407324 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407333 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407346 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
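Each pass ends in an I-level feature_gate.go:384 entry giving the effective gate map, rendered in Go fmt style as {map[Name:bool ...]}. A best-effort parser for that summary, assuming the exact rendering seen in this log:

// Parses the effective-gates summary kubelet prints, e.g.
//   feature gates: {map[ImageVolume:true NodeSwap:false ...]}
// into a map[string]bool. Best-effort: it assumes the Go fmt-style
// "map[k:v k:v]" rendering observed in this capture.
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

func parseGates(line string) map[string]bool {
	gates := map[string]bool{}
	m := regexp.MustCompile(`map\[(.*)\]`).FindStringSubmatch(line)
	if m == nil {
		return gates
	}
	for _, pair := range strings.Fields(m[1]) {
		k, v, ok := strings.Cut(pair, ":")
		if !ok {
			continue
		}
		if b, err := strconv.ParseBool(v); err == nil {
			gates[k] = b
		}
	}
	return gates
}

func main() {
	line := `feature gates: {map[ImageVolume:true KMSv1:true NodeSwap:false]}`
	fmt.Println(parseGates(line))
}

Applied to the summary lines here it recovers the seventeen resolved gates, e.g. KMSv1=true and NodeSwap=false.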
Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407357 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407367 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407376 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407386 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407395 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407404 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407413 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407425 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407435 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407444 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407454 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407464 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407473 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407483 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407492 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407501 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407510 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407519 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407528 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407539 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407549 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407558 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407569 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407579 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407588 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:38 crc 
kubenswrapper[5117]: W0130 00:10:38.407599 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407608 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407617 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407626 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407635 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407642 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407649 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407656 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407663 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407670 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407677 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407737 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407746 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407753 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407760 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407805 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407814 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407821 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407828 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407835 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407843 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407850 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407857 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407864 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:38 crc kubenswrapper[5117]: W0130 00:10:38.407871 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.407884 5117 feature_gate.go:384] feature gates: 
{map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.415990 5117 server.go:962] "Client rotation is on, will bootstrap in background"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.484456 5117 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.489317 5117 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.489489 5117 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.491234 5117 server.go:1019] "Starting client certificate rotation"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.491441 5117 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.491523 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.537494 5117 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.540742 5117 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.557814 5117 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.592877 5117 log.go:25] "Validated CRI v1 runtime API"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.794060 5117 log.go:25] "Validated CRI v1 image API"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.796985 5117 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.807046 5117 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-30-00-03-45-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.807080 5117 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0}
/run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.833566 5117 manager.go:217] Machine: {Timestamp:2026-01-30 00:10:38.830629663 +0000 UTC m=+1.942165643 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649934336 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:25eba5d0-e5a8-4791-9aa1-0b4d29f1cacf BootID:c59efde3-3a5f-43f0-8174-2d1f7716f844 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824967168 Type:vfs Inodes:4107658 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729990144 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107658 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f4:34:66 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f4:34:66 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7a:cf:88 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:48:cf:92 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:69:3e:bb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d8:3a:b1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:88:82:73:64:eb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:c4:53:6b:88:75 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649934336 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 
Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.834025 5117 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
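Every kubenswrapper entry in this capture carries a klog header after the journald prefix ("Jan 30 00:10:38 crc kubenswrapper[5117]: "): severity letter, mmdd date, wall time with microseconds, PID, then source file:line and a closing bracket before the message. A small sketch that splits one such entry into its fields; the sample line is copied from the container-manager entries just below.

// Splits a klog-formatted entry (the text after the journald prefix)
// into severity, date, time, pid, source location, and message.
// Layout assumed: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

func main() {
	entry := `I0130 00:10:38.836130 5117 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]`
	m := klogRe.FindStringSubmatch(entry)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s pid=%s src=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}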
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.834309 5117 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836130 5117 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836193 5117 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836455 5117 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836470 5117 container_manager_linux.go:306] "Creating device plugin manager"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836502 5117 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836535 5117 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.836934 5117 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.837154 5117 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.840815 5117 kubelet.go:491] "Attempting to sync node with API server"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.840842 5117 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.841814 5117 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.841844 5117 kubelet.go:397] "Adding apiserver pod source"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.841871 5117 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.847256 5117 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.847283 5117 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.848589 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.848797 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.848867 5117 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.848881 5117 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.854851 5117 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.855314 5117 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.856276 5117 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857569 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857610 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857624 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857637 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857652 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857667 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857681 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857720 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857741 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857764 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.857804 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.858314 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.862028 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.862066 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.863852 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.909746 5117 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.909839 5117 server.go:1295] "Started kubelet"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.910063 5117 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.910056 5117 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.910219 5117 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 30 00:10:38 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.917753 5117 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.918183 5117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.918238 5117 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.919301 5117 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.919339 5117 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.919362 5117 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.920381 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.921213 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.921120 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.923062 5117 server.go:317] "Adding debug handlers to kubelet server"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.923404 5117 factory.go:153] Registering CRI-O factory
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.923472 5117 factory.go:223] Registration of the crio container factory successfully
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.927971 5117 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.928002 5117 factory.go:55] Registering systemd factory
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.928013 5117 factory.go:223] Registration of the systemd container factory successfully
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.928037 5117 factory.go:103] Registering Raw factory
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.928053 5117 manager.go:1196] Started watching for new ooms in manager
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.928866 5117 manager.go:319] Starting recovery of all containers
Jan 30 00:10:38 crc kubenswrapper[5117]: E0130 00:10:38.923370 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f59bb25ea9b33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.909782835 +0000 UTC m=+2.021318755,LastTimestamp:2026-01-30 00:10:38.909782835 +0000 UTC m=+2.021318755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.969126 5117 manager.go:324] Recovery completed
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.992801 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.995965 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.996048 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.996074 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.997048 5117 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.997071 5117 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 30 00:10:38 crc kubenswrapper[5117]: I0130 00:10:38.997096 5117 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.007006 5117 policy_none.go:49] "None policy: Start"
Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.007046 5117 memory_manager.go:186] "Starting memorymanager"
policy="None" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.007072 5117 state_mem.go:35] "Initializing new in-memory state store" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.020402 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022120 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022217 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022287 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022360 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.020791 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022441 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022555 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022615 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022672 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022774 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022837 5117 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022891 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.022961 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023043 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023119 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023186 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023243 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023312 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023368 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023445 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023516 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023574 5117 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023630 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023743 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023845 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023908 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.023965 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.024024 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.024107 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.024194 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.024270 5117 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.024277 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.024465 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.026595 5117 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027538 5117 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027634 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027666 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027715 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027737 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027760 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027782 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027803 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027854 
5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027876 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027897 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027919 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027940 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027962 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.027984 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028015 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028037 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028057 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028081 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028103 5117 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028126 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028149 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028172 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028193 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028217 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028260 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028280 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028301 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028323 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028348 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028370 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028393 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028414 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028434 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028455 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028581 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028610 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028634 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028654 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028678 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028820 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028841 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028864 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028890 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028911 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028932 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028953 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028975 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.028997 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029018 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029038 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029062 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029085 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" 
volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029108 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029129 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029152 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029174 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029197 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029220 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029244 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029265 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029299 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029323 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029346 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029367 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029387 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029410 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029430 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029450 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029471 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029505 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029560 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029591 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029617 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029646 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" 
volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029667 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029722 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029748 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029771 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029795 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029817 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029865 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029887 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029910 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029936 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029958 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.029978 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030000 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030022 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030044 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030063 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030083 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030106 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030129 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030151 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030173 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030195 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030216 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030237 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030258 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030278 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030299 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030321 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030404 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030424 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030445 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030466 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030492 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030520 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030549 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030575 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030613 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030635 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030657 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030677 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030725 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030747 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030771 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030792 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030814 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030836 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030859 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030881 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030904 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030926 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030949 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030972 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.030994 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031017 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031040 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031064 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031085 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031106 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031128 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031149 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031214 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031236 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031260 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031280 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031300 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031319 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031341 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031361 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031388 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031409 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031431 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031454 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031479 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031727 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031759 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031788 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.031845 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033130 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033233 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033263 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033291 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033317 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033349 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033384 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033409 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033452 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033477 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033502 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" 
seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033530 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033561 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033729 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033763 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033788 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033816 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033844 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033869 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033899 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033923 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033948 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 
00:10:39.033971 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.033998 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034022 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034052 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034085 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034112 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034138 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034163 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034189 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034655 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034734 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034759 5117 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034787 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034816 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034842 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034864 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034924 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034952 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.034974 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035052 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035075 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035097 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035119 5117 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035142 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035164 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035191 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035215 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035238 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035261 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035282 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035303 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035325 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035346 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035372 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035393 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035416 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035443 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035464 5117 reconstruct.go:97] "Volume reconstruction finished" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035480 5117 reconciler.go:26] "Reconciler: start to sync state" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035796 5117 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035911 5117 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.035932 5117 kubelet.go:2451] "Starting kubelet main sync loop" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.036179 5117 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.036628 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.061675 5117 manager.go:341] "Starting Device Plugin manager" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.062059 5117 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.062082 5117 server.go:85] "Starting device plugin registration server" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.062627 5117 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.062647 5117 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.062980 5117 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.063152 5117 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 00:10:39 crc 
kubenswrapper[5117]: I0130 00:10:39.063174 5117 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.067407 5117 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.067469 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.122741 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.136754 5117 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.137010 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.137784 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.137845 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.137867 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.139078 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.139415 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.139520 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.139757 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.139835 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.139868 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.140378 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.140428 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.140443 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.140730 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.140870 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.140930 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.141779 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.141818 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.141828 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.143040 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.143363 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.143401 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.143986 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.144387 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.144472 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.144990 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.145035 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.145057 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.145255 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.145280 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.145296 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.147178 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.147511 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.147649 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.149377 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.149438 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.149461 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.149461 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.149563 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.150834 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.151644 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.151761 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.152826 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.152881 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.152904 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: W0130 00:10:39.154210 5117 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/cpu.weight": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/cpu.weight: no such device Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.163221 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.164123 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.164194 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.164209 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.164246 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.165018 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.194126 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.204789 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.229267 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.238837 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.238964 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239105 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239178 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239483 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239563 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239636 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239676 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239741 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239745 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239776 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 
00:10:39.239906 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.239942 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240007 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240048 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240117 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240175 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240210 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240283 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240321 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240383 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240509 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240918 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.240926 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.241196 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.241489 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.241512 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.241735 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.242135 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.242933 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.252125 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.257990 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.342993 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343078 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343121 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343152 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343184 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343212 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343250 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343265 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343320 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343375 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343381 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343166 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343336 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343445 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343426 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343278 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343510 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343552 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343592 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343661 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343666 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343729 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343764 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343796 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343855 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343880 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343964 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.344041 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.344101 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.344149 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.343764 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.344216 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.365140 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.367221 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.367445 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.367591 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.367923 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.368891 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.495382 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.506000 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.528973 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.530117 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.553113 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.559126 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:39 crc kubenswrapper[5117]: W0130 00:10:39.564053 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-5b391dcc7eb71d6977d91f754ea7fafdbeb9a0c329c92ec9959d5ec5c91de2ad WatchSource:0}: Error finding container 5b391dcc7eb71d6977d91f754ea7fafdbeb9a0c329c92ec9959d5ec5c91de2ad: Status 404 returned error can't find the container with id 5b391dcc7eb71d6977d91f754ea7fafdbeb9a0c329c92ec9959d5ec5c91de2ad Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.570357 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:10:39 crc kubenswrapper[5117]: W0130 00:10:39.572544 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-177232ed6c1b110f0fb70675cc571cd7db2ddbf791f9b0cb0ee5c91cf19a4e69 WatchSource:0}: Error finding container 177232ed6c1b110f0fb70675cc571cd7db2ddbf791f9b0cb0ee5c91cf19a4e69: Status 404 returned error can't find the container with id 177232ed6c1b110f0fb70675cc571cd7db2ddbf791f9b0cb0ee5c91cf19a4e69 Jan 30 00:10:39 crc kubenswrapper[5117]: W0130 00:10:39.580926 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-98349ec8b44c06712b8b715def8713c38125d9d7ef4291c3cd2581633f1d0498 WatchSource:0}: Error finding container 98349ec8b44c06712b8b715def8713c38125d9d7ef4291c3cd2581633f1d0498: Status 404 returned error can't find the container with id 98349ec8b44c06712b8b715def8713c38125d9d7ef4291c3cd2581633f1d0498 Jan 30 00:10:39 crc kubenswrapper[5117]: W0130 00:10:39.605856 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-cdfbdc27a6c61b3a2ba0eb7621326a8a925b9bc44c5ebf22c57530b7405c7a23 WatchSource:0}: Error finding container cdfbdc27a6c61b3a2ba0eb7621326a8a925b9bc44c5ebf22c57530b7405c7a23: Status 404 returned error can't find the container with id cdfbdc27a6c61b3a2ba0eb7621326a8a925b9bc44c5ebf22c57530b7405c7a23 Jan 30 00:10:39 crc kubenswrapper[5117]: W0130 00:10:39.608915 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-b896b1eef700ddc3de5f8a2ef46248683b0f92f11e282b4b103bd84e023a083f WatchSource:0}: Error finding container b896b1eef700ddc3de5f8a2ef46248683b0f92f11e282b4b103bd84e023a083f: Status 404 returned error can't find the container with id b896b1eef700ddc3de5f8a2ef46248683b0f92f11e282b4b103bd84e023a083f Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.769659 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.772139 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.772225 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.772247 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.772290 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.773380 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Jan 30 00:10:39 crc kubenswrapper[5117]: E0130 00:10:39.807003 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:39 crc kubenswrapper[5117]: I0130 00:10:39.865952 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.042769 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b896b1eef700ddc3de5f8a2ef46248683b0f92f11e282b4b103bd84e023a083f"} Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.044201 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cdfbdc27a6c61b3a2ba0eb7621326a8a925b9bc44c5ebf22c57530b7405c7a23"} Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.045749 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"98349ec8b44c06712b8b715def8713c38125d9d7ef4291c3cd2581633f1d0498"} Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.047251 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"177232ed6c1b110f0fb70675cc571cd7db2ddbf791f9b0cb0ee5c91cf19a4e69"} Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.049008 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5b391dcc7eb71d6977d91f754ea7fafdbeb9a0c329c92ec9959d5ec5c91de2ad"} Jan 30 00:10:40 crc kubenswrapper[5117]: E0130 00:10:40.065098 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:40 crc kubenswrapper[5117]: E0130 00:10:40.132966 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:40 crc kubenswrapper[5117]: E0130 00:10:40.330779 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Jan 30 00:10:40 crc kubenswrapper[5117]: E0130 00:10:40.565444 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.573847 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.575293 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.575340 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.575354 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.575382 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:40 crc kubenswrapper[5117]: E0130 00:10:40.576009 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.750481 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:10:40 crc kubenswrapper[5117]: E0130 00:10:40.752217 5117 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 30 00:10:40 crc kubenswrapper[5117]: I0130 00:10:40.864764 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.054959 5117 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727" exitCode=0 Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.055142 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727"} Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.055225 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.056334 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.056417 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.056440 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.056859 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.058006 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"593e17d2b7b52cdae7ea597a23e84ff0bf2aa60c375f9aca06dcd08c9e3f62e4"} Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.058076 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6"} Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.060496 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152" exitCode=0 Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.060614 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152"} Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.060719 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.061904 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.061962 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.062038 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.062420 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.063375 5117 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3" exitCode=0 Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.063500 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3"} Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.063537 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.064647 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.064725 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.064747 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.065062 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.065153 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.065746 5117 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df" exitCode=0 Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.065792 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df"} Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.065928 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.066181 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.066236 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.066256 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.066767 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.066807 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.066829 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.067121 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.738260 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.807840 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:41 crc kubenswrapper[5117]: I0130 00:10:41.865303 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Jan 30 00:10:41 crc kubenswrapper[5117]: E0130 00:10:41.932144 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.080493 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.080563 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.080584 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.084402 5117 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27" exitCode=0 Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.084529 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.084660 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.086040 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.086092 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.086103 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.086377 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.091190 
5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"6f145d8fb662efd4297227d05be0be66559525a069a56f8766ddf99188e96072"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.091289 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.093404 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.093458 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.093472 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.093766 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.094667 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"4df65a3ddf5bacacb01f75935c3483e4e65c115d77a32405d17da0426f4989e4"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.094741 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"0c3ab95093b37cc80e5bd368dd2136ddd5b4f4f24601b417cc1a9d1105b99471"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.094763 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"7316893852c737b0d9ba4d82f95e30368750d3de645e594c803519f4536f5aec"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.095875 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.096658 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.096711 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.096728 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.096940 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.097602 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"c69ec53206c2bd047ddabdee78ed4f580ff7c5dab223808d8d5f78ea3efadbd0"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.097643 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b224c5fd1d4850a504ea24d2a7a69f9bc69c770196bb142ca72970d03830cb31"} Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.097825 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.098293 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.098314 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.098327 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.098485 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.108364 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f59bb25ea9b33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.909782835 +0000 UTC m=+2.021318755,LastTimestamp:2026-01-30 00:10:38.909782835 +0000 UTC m=+2.021318755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.181819 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.182842 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.182875 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.182884 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.182907 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.183230 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Jan 30 00:10:42 crc kubenswrapper[5117]: E0130 00:10:42.383613 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:42 crc kubenswrapper[5117]: I0130 00:10:42.865333 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.105814 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e784993bfb919779c8346dfe5f6c6f56b45695a37ec41ac18609f05cfa64f56a"} Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.105898 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356"} Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.106026 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.106941 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.107016 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.107037 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc kubenswrapper[5117]: E0130 00:10:43.107462 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.108866 5117 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c" exitCode=0 Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.108968 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c"} Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.109168 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.109501 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.109541 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.110119 5117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.110181 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111059 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111115 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111137 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc 
kubenswrapper[5117]: I0130 00:10:43.111132 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111249 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111301 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111323 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111300 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111365 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111444 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111476 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.111494 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc kubenswrapper[5117]: E0130 00:10:43.111723 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5117]: E0130 00:10:43.112272 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5117]: E0130 00:10:43.112394 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5117]: E0130 00:10:43.113251 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5117]: I0130 00:10:43.268867 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.113826 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.114523 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"aec9c97c3cc2d8213a5562ed88f952b05cf8c3d680a573498ad7b11259cf9a89"} Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.114550 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"821751418b1c5520e37391e8725d8ce1d3b5e1a6c4904587df7e9523af49ec05"} Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.114561 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"cb9268a4c90e72b2cc87518edaf2e2d38186097e11994c07eef72b31deaf5f7d"} Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.114652 5117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.114672 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.115155 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.115176 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.115184 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:44 crc kubenswrapper[5117]: E0130 00:10:44.115445 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.115883 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.115906 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.115915 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:44 crc kubenswrapper[5117]: E0130 00:10:44.116124 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.299488 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.311146 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:44 crc kubenswrapper[5117]: I0130 00:10:44.961049 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.124427 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.125028 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.125057 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"36ca5cc06dc5d68e32e4afff843811d1c9a18c194cd728caf0b991d8afe748e4"} Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.126121 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"465448a262b54efe8e7d250fdbc015c4980c5fe972cce80cc5b93ac3b5fbb74a"} Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.126443 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.126523 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.126554 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.126962 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.127137 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.127272 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:45 crc kubenswrapper[5117]: E0130 00:10:45.127998 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:45 crc kubenswrapper[5117]: E0130 00:10:45.127160 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.197505 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.197908 5117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.197979 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.199435 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.199484 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.199498 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:45 crc kubenswrapper[5117]: E0130 00:10:45.199897 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.383562 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.385427 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.385657 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.385858 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:45 crc kubenswrapper[5117]: I0130 00:10:45.386007 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.033933 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.034824 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.036293 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.036374 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.036400 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:46 crc kubenswrapper[5117]: E0130 00:10:46.037085 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.128453 5117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.128541 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.129308 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.129417 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.129480 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.129495 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:46 crc kubenswrapper[5117]: E0130 00:10:46.130126 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.130273 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.130328 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.130349 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:46 crc kubenswrapper[5117]: E0130 00:10:46.131149 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:46 crc kubenswrapper[5117]: I0130 00:10:46.694452 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.093002 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.093790 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.095277 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.095349 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.095374 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:47 crc kubenswrapper[5117]: E0130 00:10:47.096096 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.131525 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.133319 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.133405 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.133422 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:47 crc kubenswrapper[5117]: E0130 00:10:47.134023 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.240041 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.240419 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.241839 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.241913 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.241942 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:47 crc kubenswrapper[5117]: E0130 00:10:47.242680 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.445127 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.445573 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.446842 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.446914 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.446936 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:47 crc kubenswrapper[5117]: E0130 00:10:47.447555 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:47 crc kubenswrapper[5117]: I0130 00:10:47.795597 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:48 crc kubenswrapper[5117]: I0130 00:10:48.134219 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:48 crc kubenswrapper[5117]: I0130 00:10:48.135537 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:48 crc kubenswrapper[5117]: I0130 00:10:48.135599 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:48 crc kubenswrapper[5117]: I0130 00:10:48.135619 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:48 crc kubenswrapper[5117]: E0130 00:10:48.136414 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:49 crc kubenswrapper[5117]: E0130 00:10:49.067824 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:10:51 crc kubenswrapper[5117]: I0130 00:10:51.799680 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:51 crc kubenswrapper[5117]: I0130 00:10:51.800080 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:51 crc kubenswrapper[5117]: I0130 00:10:51.801532 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:51 crc kubenswrapper[5117]: I0130 00:10:51.801600 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:51 crc kubenswrapper[5117]: I0130 00:10:51.801621 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:51 crc kubenswrapper[5117]: E0130 00:10:51.802245 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:51 crc kubenswrapper[5117]: I0130 00:10:51.808725 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:10:52 crc kubenswrapper[5117]: I0130 00:10:52.147679 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:52 crc kubenswrapper[5117]: I0130 00:10:52.148553 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:52 crc kubenswrapper[5117]: I0130 00:10:52.148613 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:52 crc kubenswrapper[5117]: I0130 00:10:52.148632 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:52 crc kubenswrapper[5117]: E0130 00:10:52.149108 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:53 crc kubenswrapper[5117]: I0130 00:10:53.323630 5117 trace.go:236] Trace[813062358]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:43.321) (total time: 10002ms):
Jan 30 00:10:53 crc kubenswrapper[5117]: Trace[813062358]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:10:53.323)
Jan 30 00:10:53 crc kubenswrapper[5117]: Trace[813062358]: [10.002036247s] [10.002036247s] END
Jan 30 00:10:53 crc kubenswrapper[5117]: E0130 00:10:53.323678 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:10:53 crc kubenswrapper[5117]: I0130 00:10:53.866949 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 30 00:10:54 crc kubenswrapper[5117]: I0130 00:10:54.210833 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 30 00:10:54 crc kubenswrapper[5117]: I0130 00:10:54.210958 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 30 00:10:54 crc kubenswrapper[5117]: I0130 00:10:54.220912 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 30 00:10:54 crc kubenswrapper[5117]: I0130 00:10:54.220987 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 30 00:10:54 crc kubenswrapper[5117]: I0130 00:10:54.800113 5117 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 30 00:10:54 crc kubenswrapper[5117]: I0130 00:10:54.800291 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 30 00:10:55 crc kubenswrapper[5117]: E0130 00:10:55.134087 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 30 00:10:55 crc kubenswrapper[5117]: I0130 00:10:55.210992 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]log ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]etcd ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-filter ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-informers ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-controllers ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/crd-informer-synced ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-system-namespaces-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 30 00:10:55 crc kubenswrapper[5117]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/bootstrap-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-registration-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-discovery-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]autoregister-completion ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapi-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 30 00:10:55 crc kubenswrapper[5117]: livez check failed
Jan 30 00:10:55 crc kubenswrapper[5117]: I0130 00:10:55.211442 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:10:57 crc kubenswrapper[5117]: I0130 00:10:57.851432 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:57 crc kubenswrapper[5117]: I0130 00:10:57.852561 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:57 crc kubenswrapper[5117]: I0130 00:10:57.853806 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:57 crc kubenswrapper[5117]: I0130 00:10:57.853847 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:57 crc kubenswrapper[5117]: I0130 00:10:57.853859 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:57 crc kubenswrapper[5117]: E0130 00:10:57.854268 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:57 crc kubenswrapper[5117]: I0130 00:10:57.873476 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 30 00:10:58 crc kubenswrapper[5117]: I0130 00:10:58.166205 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:10:58 crc kubenswrapper[5117]: I0130 00:10:58.167965 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:10:58 crc kubenswrapper[5117]: I0130 00:10:58.168053 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:10:58 crc kubenswrapper[5117]: I0130 00:10:58.168107 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:10:58 crc kubenswrapper[5117]: E0130 00:10:58.168992 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:10:58 crc kubenswrapper[5117]: E0130 00:10:58.192021 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.069326 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:10:59 crc kubenswrapper[5117]: I0130 00:10:59.220844 5117 trace.go:236] Trace[1962369498]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:47.184) (total time: 12036ms):
Jan 30 00:10:59 crc kubenswrapper[5117]: Trace[1962369498]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 12036ms (00:10:59.220)
Jan 30 00:10:59 crc kubenswrapper[5117]: Trace[1962369498]: [12.036534705s] [12.036534705s] END
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.220916 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:10:59 crc kubenswrapper[5117]: I0130 00:10:59.221056 5117 trace.go:236] Trace[891228543]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:47.767) (total time: 11453ms):
Jan 30 00:10:59 crc kubenswrapper[5117]: Trace[891228543]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 11453ms (00:10:59.221)
Jan 30 00:10:59 crc kubenswrapper[5117]: Trace[891228543]: [11.453526152s] [11.453526152s] END
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.221088 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.220981 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb25ea9b33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.909782835 +0000 UTC m=+2.021318755,LastTimestamp:2026-01-30 00:10:38.909782835 +0000 UTC m=+2.021318755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: I0130 00:10:59.222148 5117 trace.go:236] Trace[1977777845]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:47.091) (total time: 12130ms):
Jan 30 00:10:59 crc kubenswrapper[5117]: Trace[1977777845]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 12130ms (00:10:59.221)
Jan 30 00:10:59 crc kubenswrapper[5117]: Trace[1977777845]: [12.130489233s] [12.130489233s] END
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.222253 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.222447 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.223410 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.231772 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.240651 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.248998 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2f1b3982 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.063964034 +0000 UTC m=+2.175499914,LastTimestamp:2026-01-30 00:10:39.063964034 +0000 UTC m=+2.175499914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.258335 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.13781868 +0000 UTC m=+2.249354600,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.266350 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.137856631 +0000 UTC m=+2.249392551,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.274500 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f7bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:39.137877122 +0000 UTC m=+2.249413052,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.282180 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.139800305 +0000 UTC m=+2.251336195,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.289725 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.139844936 +0000 UTC m=+2.251380826,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.297787 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f7bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:39.139873667 +0000 UTC m=+2.251409557,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:10:59 crc kubenswrapper[5117]: E0130 00:10:59.307811 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.140403811 +0000 UTC m=+2.251939721,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.005847 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.140436222 +0000 UTC m=+2.251972132,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.005927 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.009492 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f7bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:39.140449812 +0000 UTC m=+2.251985712,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.010929 5117 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.015725 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.141797139 +0000 UTC m=+2.253333029,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.032000 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.14182292 +0000 UTC m=+2.253358810,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.052049 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f7bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:39.14183223 +0000 UTC m=+2.253368120,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.056424 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.056518 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.074447 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.143085065 +0000 UTC m=+2.254620995,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.080074 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.143382253 +0000 UTC m=+2.254918183,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.088415 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f7bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:39.143412764 +0000 UTC m=+2.254948694,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.094007 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.145009348 +0000 UTC m=+2.256545278,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.098593 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.145046039 +0000 UTC m=+2.256581969,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.103730 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f7bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f7bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996085692 +0000 UTC m=+2.107621622,LastTimestamp:2026-01-30 00:10:39.145067259 +0000 UTC m=+2.256603189,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.108911 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0e6a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0e6a68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.99601572 +0000 UTC m=+2.107551650,LastTimestamp:2026-01-30 00:10:39.145272015 +0000 UTC m=+2.256807905,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.111377 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59bb2b0f239c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59bb2b0f239c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:38.996063132 +0000 UTC m=+2.107599062,LastTimestamp:2026-01-30 00:10:39.145285865 +0000 UTC m=+2.256821755,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.113844 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59bb4d604771 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.571806065 +0000 UTC m=+2.683341985,LastTimestamp:2026-01-30 00:10:39.571806065 +0000 UTC m=+2.683341985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.116465 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bb4dc575b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.578437046 +0000 UTC m=+2.689972976,LastTimestamp:2026-01-30 00:10:39.578437046 +0000 UTC m=+2.689972976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.118464 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bb4e4f13fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.587455994 +0000 UTC m=+2.698991924,LastTimestamp:2026-01-30 00:10:39.587455994 +0000 UTC m=+2.698991924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.121569 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb4f9a0b61 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.609146209 +0000 UTC m=+2.720682139,LastTimestamp:2026-01-30 00:10:39.609146209 +0000 UTC m=+2.720682139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.127310 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bb4ffe799d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:39.615728029 +0000 UTC m=+2.727263949,LastTimestamp:2026-01-30 00:10:39.615728029 +0000 UTC m=+2.727263949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.132724 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bb823bed3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.458616122 +0000 UTC m=+3.570152012,LastTimestamp:2026-01-30 00:10:40.458616122 +0000 UTC m=+3.570152012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.137437 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb823e3f4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.458768206 +0000 UTC m=+3.570304136,LastTimestamp:2026-01-30 00:10:40.458768206 +0000 UTC m=+3.570304136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.142860 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bb82507301 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.459961089 +0000 UTC m=+3.571496989,LastTimestamp:2026-01-30 00:10:40.459961089 +0000 UTC m=+3.571496989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.149783 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bb82503f52 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.459947858 +0000 UTC m=+3.571483748,LastTimestamp:2026-01-30 00:10:40.459947858 +0000 UTC m=+3.571483748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.155054 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59bb82505398 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.459953048 +0000 UTC m=+3.571488958,LastTimestamp:2026-01-30 00:10:40.459953048 +0000 UTC m=+3.571488958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.159519 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bb8326e440 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.474014784 +0000 UTC m=+3.585550684,LastTimestamp:2026-01-30 00:10:40.474014784 +0000 UTC m=+3.585550684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.165305 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bb833e3944 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.475543876 +0000 UTC m=+3.587079786,LastTimestamp:2026-01-30 00:10:40.475543876 +0000 UTC m=+3.587079786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.169269 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bb835b5260 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.477450848 +0000 UTC m=+3.588986748,LastTimestamp:2026-01-30 00:10:40.477450848 +0000 UTC m=+3.588986748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.174281 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb835ede41 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.477683265 +0000 UTC m=+3.589219155,LastTimestamp:2026-01-30 00:10:40.477683265 +0000 UTC m=+3.589219155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.178423 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb8370cdc9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.478858697 +0000 UTC m=+3.590394597,LastTimestamp:2026-01-30 00:10:40.478858697 +0000 UTC m=+3.590394597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.182645 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59bb8372c49c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.47898742 +0000 UTC m=+3.590523310,LastTimestamp:2026-01-30 00:10:40.47898742 +0000 UTC m=+3.590523310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.187548 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb97d7eaef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.821160687 +0000 UTC m=+3.932696607,LastTimestamp:2026-01-30 00:10:40.821160687 +0000 UTC m=+3.932696607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.191455 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb98b413c7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.835589063 +0000 UTC m=+3.947124983,LastTimestamp:2026-01-30 00:10:40.835589063 +0000 UTC m=+3.947124983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.195823 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bb98caa50b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:40.837068043 +0000 UTC m=+3.948603943,LastTimestamp:2026-01-30 00:10:40.837068043 +0000 UTC m=+3.948603943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.201726 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.201890 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bba5fbd8eb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.058396395 +0000 UTC m=+4.169932335,LastTimestamp:2026-01-30 00:10:41.058396395 +0000 UTC m=+4.169932335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.201996 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.202967 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.203005 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.203018 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.203038 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.203158 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.203406 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.212208 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bba65b1988 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.064638856 +0000 UTC m=+4.176174786,LastTimestamp:2026-01-30 00:10:41.064638856 +0000 UTC m=+4.176174786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.213565 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.217517 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59bba699934a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.068733258 +0000 UTC m=+4.180269148,LastTimestamp:2026-01-30 00:10:41.068733258 +0000 UTC m=+4.180269148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.224789 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bba6b22761 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.070344033 +0000 UTC m=+4.181879963,LastTimestamp:2026-01-30 00:10:41.070344033 +0000 UTC m=+4.181879963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.234191 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59bbb8951dff openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.370430975 +0000 UTC m=+4.481966865,LastTimestamp:2026-01-30 00:10:41.370430975 +0000 UTC m=+4.481966865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.238922 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbb89b2939 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.370827065 +0000 UTC m=+4.482362965,LastTimestamp:2026-01-30 00:10:41.370827065 +0000 UTC m=+4.482362965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 
00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.243643 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bbb99f917a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.387893114 +0000 UTC m=+4.499429004,LastTimestamp:2026-01-30 00:10:41.387893114 +0000 UTC m=+4.499429004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.252142 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbba068259 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.394639449 +0000 UTC m=+4.506175339,LastTimestamp:2026-01-30 00:10:41.394639449 +0000 UTC m=+4.506175339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.257105 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbba0d6431 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.395090481 +0000 UTC m=+4.506626371,LastTimestamp:2026-01-30 00:10:41.395090481 +0000 UTC m=+4.506626371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.265030 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59bbba165245 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.395675717 +0000 UTC m=+4.507211607,LastTimestamp:2026-01-30 00:10:41.395675717 +0000 UTC m=+4.507211607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.276369 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbba1af5c3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.395979715 +0000 UTC m=+4.507515615,LastTimestamp:2026-01-30 00:10:41.395979715 +0000 UTC m=+4.507515615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.285081 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbbc7556cc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.435457228 +0000 UTC m=+4.546993118,LastTimestamp:2026-01-30 00:10:41.435457228 +0000 UTC m=+4.546993118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.296059 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbbc895d30 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.436769584 +0000 UTC m=+4.548305474,LastTimestamp:2026-01-30 00:10:41.436769584 +0000 UTC m=+4.548305474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.304736 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bbbd0cf16f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.445392751 +0000 UTC m=+4.556928641,LastTimestamp:2026-01-30 00:10:41.445392751 +0000 UTC m=+4.556928641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.312249 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbc7e56334 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.627349812 +0000 UTC m=+4.738885702,LastTimestamp:2026-01-30 00:10:41.627349812 +0000 UTC m=+4.738885702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.319521 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bbc881484f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.637566543 +0000 UTC m=+4.749102433,LastTimestamp:2026-01-30 00:10:41.637566543 +0000 UTC m=+4.749102433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.321622 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbc93e9909 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.649973513 +0000 UTC m=+4.761509403,LastTimestamp:2026-01-30 00:10:41.649973513 +0000 UTC m=+4.761509403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.326405 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbc9559a9c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.651481244 +0000 UTC m=+4.763017154,LastTimestamp:2026-01-30 00:10:41.651481244 +0000 UTC m=+4.763017154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.328761 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bbca83303f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.671245887 +0000 UTC m=+4.782781787,LastTimestamp:2026-01-30 00:10:41.671245887 +0000 UTC m=+4.782781787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.335826 5117 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bbcaa0517b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.673154939 +0000 UTC m=+4.784690839,LastTimestamp:2026-01-30 00:10:41.673154939 +0000 UTC m=+4.784690839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.344101 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbce7d8d2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.737985327 +0000 UTC m=+4.849521227,LastTimestamp:2026-01-30 00:10:41.737985327 +0000 UTC m=+4.849521227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.349511 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbd00becc4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.764093124 +0000 UTC m=+4.875629024,LastTimestamp:2026-01-30 00:10:41.764093124 +0000 UTC m=+4.875629024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.358321 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbd01f59b6 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.765366198 +0000 UTC m=+4.876902098,LastTimestamp:2026-01-30 00:10:41.765366198 +0000 UTC m=+4.876902098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.366328 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbd8b7b722 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.909569314 +0000 UTC m=+5.021105204,LastTimestamp:2026-01-30 00:10:41.909569314 +0000 UTC m=+5.021105204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.372307 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bbd9f6e1c0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.930486208 +0000 UTC m=+5.042022098,LastTimestamp:2026-01-30 00:10:41.930486208 +0000 UTC m=+5.042022098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.377339 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59bbda4f3365 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.936274277 +0000 UTC m=+5.047810167,LastTimestamp:2026-01-30 00:10:41.936274277 +0000 UTC m=+5.047810167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.383978 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bbdb326cc3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:41.951165635 +0000 UTC m=+5.062701525,LastTimestamp:2026-01-30 00:10:41.951165635 +0000 UTC m=+5.062701525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.388487 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbdf91a9c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.024516038 +0000 UTC m=+5.136051928,LastTimestamp:2026-01-30 00:10:42.024516038 +0000 UTC m=+5.136051928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.392849 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbe093f65c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.041443932 +0000 UTC m=+5.152979822,LastTimestamp:2026-01-30 00:10:42.041443932 +0000 UTC m=+5.152979822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.397098 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbe0a7f900 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.042755328 +0000 UTC m=+5.154291218,LastTimestamp:2026-01-30 00:10:42.042755328 +0000 UTC m=+5.154291218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.405908 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bbe3600729 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.088372009 +0000 UTC m=+5.199907939,LastTimestamp:2026-01-30 00:10:42.088372009 +0000 UTC m=+5.199907939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.411330 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbf1995215 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.327007765 +0000 UTC m=+5.438543645,LastTimestamp:2026-01-30 00:10:42.327007765 +0000 UTC 
m=+5.438543645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.416746 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbf2b393d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.345505753 +0000 UTC m=+5.457041643,LastTimestamp:2026-01-30 00:10:42.345505753 +0000 UTC m=+5.457041643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.421505 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbf2c71800 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.346784768 +0000 UTC m=+5.458320668,LastTimestamp:2026-01-30 00:10:42.346784768 +0000 UTC m=+5.458320668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.426794 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bbf2d40dbf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.347634111 +0000 UTC m=+5.459170001,LastTimestamp:2026-01-30 00:10:42.347634111 +0000 UTC m=+5.459170001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.431163 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bbf4ca5e60 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.380553824 +0000 UTC m=+5.492089714,LastTimestamp:2026-01-30 00:10:42.380553824 +0000 UTC m=+5.492089714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.437772 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc03bde7a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.631395235 +0000 UTC m=+5.742931165,LastTimestamp:2026-01-30 00:10:42.631395235 +0000 UTC m=+5.742931165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.443551 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc05042afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.652777212 +0000 UTC m=+5.764313142,LastTimestamp:2026-01-30 00:10:42.652777212 +0000 UTC m=+5.764313142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.457216 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc208e1bcd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.114802125 +0000 UTC m=+6.226338055,LastTimestamp:2026-01-30 00:10:43.114802125 +0000 UTC m=+6.226338055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.463344 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc32ea24c9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.422823625 +0000 UTC m=+6.534359525,LastTimestamp:2026-01-30 00:10:43.422823625 +0000 UTC m=+6.534359525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.468992 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc33befa76 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.436771958 +0000 UTC m=+6.548307858,LastTimestamp:2026-01-30 00:10:43.436771958 +0000 UTC m=+6.548307858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.476680 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc33d62175 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.438289269 +0000 UTC m=+6.549825179,LastTimestamp:2026-01-30 00:10:43.438289269 +0000 UTC m=+6.549825179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.483872 5117 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc438a5175 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.701756277 +0000 UTC m=+6.813292167,LastTimestamp:2026-01-30 00:10:43.701756277 +0000 UTC m=+6.813292167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.491934 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc4455157b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.715044731 +0000 UTC m=+6.826580631,LastTimestamp:2026-01-30 00:10:43.715044731 +0000 UTC m=+6.826580631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.501595 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc446c2bff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.716557823 +0000 UTC m=+6.828093713,LastTimestamp:2026-01-30 00:10:43.716557823 +0000 UTC m=+6.828093713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.512526 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc546f73bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.985208252 +0000 UTC m=+7.096744142,LastTimestamp:2026-01-30 00:10:43.985208252 +0000 UTC m=+7.096744142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.517470 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc554cf939 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:43.999725881 +0000 UTC m=+7.111261771,LastTimestamp:2026-01-30 00:10:43.999725881 +0000 UTC m=+7.111261771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.525677 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc55622785 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:44.001113989 +0000 UTC m=+7.112649879,LastTimestamp:2026-01-30 00:10:44.001113989 +0000 UTC m=+7.112649879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.533591 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc6672616e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:44.287390062 +0000 UTC m=+7.398925952,LastTimestamp:2026-01-30 00:10:44.287390062 +0000 UTC m=+7.398925952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.539981 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc67fc99c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:44.313225671 +0000 UTC m=+7.424761591,LastTimestamp:2026-01-30 00:10:44.313225671 +0000 UTC m=+7.424761591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.545559 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc681f770f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:44.315510543 +0000 UTC m=+7.427046463,LastTimestamp:2026-01-30 00:10:44.315510543 +0000 UTC m=+7.427046463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.551469 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc7c206b8a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:44.65111745 +0000 UTC m=+7.762653380,LastTimestamp:2026-01-30 00:10:44.65111745 +0000 UTC m=+7.762653380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.560601 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59bc7d900acb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:44.675209931 +0000 UTC 
m=+7.786745851,LastTimestamp:2026-01-30 00:10:44.675209931 +0000 UTC m=+7.786745851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.569636 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:11:00 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188f59beb5ef6e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:11:00 crc kubenswrapper[5117]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:11:00 crc kubenswrapper[5117]: Jan 30 00:11:00 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:54.210919989 +0000 UTC m=+17.322455879,LastTimestamp:2026-01-30 00:10:54.210919989 +0000 UTC m=+17.322455879,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:11:00 crc kubenswrapper[5117]: > Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.573986 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59beb5f0ca2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:54.211009071 +0000 UTC m=+17.322544961,LastTimestamp:2026-01-30 00:10:54.211009071 +0000 UTC m=+17.322544961,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.577678 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59beb5ef6e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:11:00 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188f59beb5ef6e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:11:00 crc 
kubenswrapper[5117]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:11:00 crc kubenswrapper[5117]: Jan 30 00:11:00 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:54.210919989 +0000 UTC m=+17.322455879,LastTimestamp:2026-01-30 00:10:54.220955011 +0000 UTC m=+17.332490901,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:11:00 crc kubenswrapper[5117]: > Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.582302 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59beb5f0ca2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59beb5f0ca2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:54.211009071 +0000 UTC m=+17.322544961,LastTimestamp:2026-01-30 00:10:54.221012413 +0000 UTC m=+17.332548303,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.586487 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 30 00:11:00 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-controller-manager-crc.188f59bed90f4220 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 30 00:11:00 crc kubenswrapper[5117]: body: Jan 30 00:11:00 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:54.800208416 +0000 UTC m=+17.911744336,LastTimestamp:2026-01-30 00:10:54.800208416 +0000 UTC m=+17.911744336,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:11:00 crc kubenswrapper[5117]: > Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.592454 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59bed911a3de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:54.80036451 +0000 UTC m=+17.911900440,LastTimestamp:2026-01-30 00:10:54.80036451 +0000 UTC m=+17.911900440,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.596895 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 30 00:11:00 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188f59bef18f7d2a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Jan 30 00:11:00 crc kubenswrapper[5117]: body: [+]ping ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]log ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]etcd ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-filter ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-informers ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-controllers ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/crd-informer-synced ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-system-namespaces-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 30 00:11:00 crc kubenswrapper[5117]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/bootstrap-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-registration-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-discovery-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]autoregister-completion ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapi-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 30 00:11:00 crc kubenswrapper[5117]: livez check failed
Jan 30 00:11:00 crc kubenswrapper[5117]: 
Jan 30 00:11:00 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:55.211265322 +0000 UTC m=+18.322801212,LastTimestamp:2026-01-30 00:10:55.211265322 +0000 UTC m=+18.322801212,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 30 00:11:00 crc kubenswrapper[5117]: >
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.601729 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bef192d548 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:55.211484488 +0000 UTC m=+18.323020378,LastTimestamp:2026-01-30 00:10:55.211484488 +0000 UTC m=+18.323020378,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.607540 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 30 00:11:00 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188f59c0125b62f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": EOF Jan 30 00:11:00 crc kubenswrapper[5117]: body: Jan 30 00:11:00 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:00.056466166 +0000 UTC m=+23.168002066,LastTimestamp:2026-01-30 00:11:00.056466166 +0000 UTC m=+23.168002066,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:11:00 crc kubenswrapper[5117]: > Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.613096 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0125ca276 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:00.056547958 +0000 UTC m=+23.168083858,LastTimestamp:2026-01-30 00:11:00.056547958 +0000 UTC m=+23.168083858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.620776 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:11:00 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188f59c01b189cd0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 30 00:11:00 crc kubenswrapper[5117]: body: Jan 30 00:11:00 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:00.203085008 +0000 UTC m=+23.314620938,LastTimestamp:2026-01-30 00:11:00.203085008 +0000 UTC m=+23.314620938,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:11:00 crc kubenswrapper[5117]: > Jan 30 00:11:00 crc kubenswrapper[5117]: E0130 00:11:00.626236 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c01b1a720b openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:00.203205131 +0000 UTC m=+23.314741051,LastTimestamp:2026-01-30 00:11:00.203205131 +0000 UTC m=+23.314741051,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:00 crc kubenswrapper[5117]: I0130 00:11:00.873341 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.004125 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.006190 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e784993bfb919779c8346dfe5f6c6f56b45695a37ec41ac18609f05cfa64f56a" exitCode=255 Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.006250 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e784993bfb919779c8346dfe5f6c6f56b45695a37ec41ac18609f05cfa64f56a"} Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.006443 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.006952 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.006978 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.006988 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:01 crc kubenswrapper[5117]: E0130 00:11:01.007253 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.007504 5117 scope.go:117] "RemoveContainer" containerID="e784993bfb919779c8346dfe5f6c6f56b45695a37ec41ac18609f05cfa64f56a" Jan 30 00:11:01 crc kubenswrapper[5117]: E0130 00:11:01.025551 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bbf2c71800\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbf2c71800 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.346784768 +0000 UTC m=+5.458320668,LastTimestamp:2026-01-30 00:11:01.008977023 +0000 UTC m=+24.120512913,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.073632 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:01 crc kubenswrapper[5117]: E0130 00:11:01.317612 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bc03bde7a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc03bde7a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.631395235 +0000 UTC m=+5.742931165,LastTimestamp:2026-01-30 00:11:01.311219511 +0000 UTC m=+24.422755401,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:01 crc kubenswrapper[5117]: E0130 00:11:01.330652 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bc05042afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc05042afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.652777212 +0000 UTC m=+5.764313142,LastTimestamp:2026-01-30 00:11:01.325216974 +0000 UTC m=+24.436752864,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:01 crc kubenswrapper[5117]: E0130 00:11:01.540373 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.805213 5117 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.805486 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.806674 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.806747 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.806761 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:01 crc kubenswrapper[5117]: E0130 00:11:01.807164 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.810108 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:11:01 crc kubenswrapper[5117]: I0130 00:11:01.871492 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.010551 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.012426 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a"} Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.012478 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.012498 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.013116 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.013145 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.013157 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.013226 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.013256 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.013268 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:02 crc kubenswrapper[5117]: E0130 00:11:02.013553 5117 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:02 crc kubenswrapper[5117]: E0130 00:11:02.013831 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:02 crc kubenswrapper[5117]: I0130 00:11:02.880408 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.019306 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.020938 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.023856 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a" exitCode=255 Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.023959 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a"} Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.024022 5117 scope.go:117] "RemoveContainer" containerID="e784993bfb919779c8346dfe5f6c6f56b45695a37ec41ac18609f05cfa64f56a" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.024202 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.025152 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.025196 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.025212 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:03 crc kubenswrapper[5117]: E0130 00:11:03.025738 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.026150 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a" Jan 30 00:11:03 crc kubenswrapper[5117]: E0130 00:11:03.026593 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:03 crc kubenswrapper[5117]: E0130 00:11:03.040653 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:03 crc kubenswrapper[5117]: I0130 00:11:03.872761 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.030062 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.033616 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.034641 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.034765 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.034795 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:04 crc kubenswrapper[5117]: E0130 00:11:04.035445 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.035946 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a" Jan 30 00:11:04 crc kubenswrapper[5117]: E0130 00:11:04.036282 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:04 crc kubenswrapper[5117]: E0130 00:11:04.045770 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59c0c3626b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:04.03622856 +0000 UTC m=+27.147764480,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:04 crc kubenswrapper[5117]: I0130 00:11:04.872258 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:05 crc kubenswrapper[5117]: I0130 00:11:05.623913 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:05 crc kubenswrapper[5117]: I0130 00:11:05.628479 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:05 crc kubenswrapper[5117]: I0130 00:11:05.628615 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:05 crc kubenswrapper[5117]: I0130 00:11:05.628642 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:05 crc kubenswrapper[5117]: I0130 00:11:05.628762 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:05 crc kubenswrapper[5117]: E0130 00:11:05.647573 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:11:05 crc kubenswrapper[5117]: E0130 00:11:05.805232 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:11:05 crc kubenswrapper[5117]: I0130 00:11:05.873301 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:05 crc kubenswrapper[5117]: E0130 00:11:05.888398 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:11:06 crc kubenswrapper[5117]: I0130 00:11:06.871151 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:07 crc 
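
Every request the kubelet makes above is authenticated as system:anonymous, so RBAC denies each verb in turn: create/patch on events, get on nodes, csinodes, and the node Lease, list on csidrivers and services. A minimal client-go sketch of the same authorization question, equivalent to `kubectl auth can-i create events -n openshift-etcd`, follows; the file name, kubeconfig path, and output format are illustrative assumptions, not taken from this log.

// cani.go — minimal sketch, assuming client-go and a kubeconfig at the
// default home path; asks the API server whether the *current* identity
// may create events in openshift-etcd (the check failing repeatedly above).
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ssar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "openshift-etcd",
				Verb:      "create",
				Resource:  "events",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), ssar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

Pointing this at the kubeconfig the kubelet itself uses (the path is distribution-specific) would distinguish a denied node identity from the anonymous fallback seen here, which usually indicates the client credentials were never presented or not accepted.
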
kubenswrapper[5117]: E0130 00:11:07.705138 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:11:07 crc kubenswrapper[5117]: I0130 00:11:07.876106 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:08 crc kubenswrapper[5117]: E0130 00:11:08.550512 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:08 crc kubenswrapper[5117]: I0130 00:11:08.872023 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:09 crc kubenswrapper[5117]: E0130 00:11:09.070435 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:09 crc kubenswrapper[5117]: E0130 00:11:09.715220 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:11:09 crc kubenswrapper[5117]: I0130 00:11:09.874171 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:10 crc kubenswrapper[5117]: I0130 00:11:10.878943 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.072956 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.073342 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.074333 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.074487 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.074613 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:11 crc kubenswrapper[5117]: E0130 00:11:11.075236 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 
30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.076673 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a" Jan 30 00:11:11 crc kubenswrapper[5117]: E0130 00:11:11.077072 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:11 crc kubenswrapper[5117]: E0130 00:11:11.085817 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59c0c3626b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:11.077023969 +0000 UTC m=+34.188559869,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:11 crc kubenswrapper[5117]: I0130 00:11:11.871469 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.013324 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.058109 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.059121 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.059415 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.059668 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:12 crc kubenswrapper[5117]: E0130 00:11:12.060529 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.061160 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a" Jan 30 00:11:12 crc kubenswrapper[5117]: E0130 00:11:12.061680 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:12 crc kubenswrapper[5117]: E0130 00:11:12.070250 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59c0c3626b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:12.061614698 +0000 UTC m=+35.173150618,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.648440 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.650337 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.650572 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.650819 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.651089 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:12 crc kubenswrapper[5117]: E0130 00:11:12.667512 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:11:12 crc kubenswrapper[5117]: I0130 00:11:12.874092 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:13 crc kubenswrapper[5117]: I0130 00:11:13.873901 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:14 crc kubenswrapper[5117]: I0130 00:11:14.872966 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:15 crc 
kubenswrapper[5117]: E0130 00:11:15.558624 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:15 crc kubenswrapper[5117]: I0130 00:11:15.872377 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:16 crc kubenswrapper[5117]: I0130 00:11:16.872311 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:17 crc kubenswrapper[5117]: I0130 00:11:17.873452 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:18 crc kubenswrapper[5117]: I0130 00:11:18.870485 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:19 crc kubenswrapper[5117]: E0130 00:11:19.071565 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5117]: I0130 00:11:19.668927 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:19 crc kubenswrapper[5117]: I0130 00:11:19.670850 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5117]: I0130 00:11:19.670958 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5117]: I0130 00:11:19.670986 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5117]: I0130 00:11:19.671039 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:19 crc kubenswrapper[5117]: E0130 00:11:19.687221 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:11:19 crc kubenswrapper[5117]: I0130 00:11:19.873512 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:20 crc kubenswrapper[5117]: I0130 00:11:20.869947 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:21 crc kubenswrapper[5117]: I0130 00:11:21.869317 5117 csi_plugin.go:988] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:22 crc kubenswrapper[5117]: E0130 00:11:22.566573 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:22 crc kubenswrapper[5117]: I0130 00:11:22.870580 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.036668 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.037781 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.037828 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.037844 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.038342 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.038815 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a" Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.052348 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bbf2c71800\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbf2c71800 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.346784768 +0000 UTC m=+5.458320668,LastTimestamp:2026-01-30 00:11:23.040505578 +0000 UTC m=+46.152041468,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.349684 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bc03bde7a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc03bde7a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.036668 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.037781 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.037828 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.037844 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.038342 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.038815 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a"
Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.052348 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bbf2c71800\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bbf2c71800 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.346784768 +0000 UTC m=+5.458320668,LastTimestamp:2026-01-30 00:11:23.040505578 +0000 UTC m=+46.152041468,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.349684 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bc03bde7a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc03bde7a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.631395235 +0000 UTC m=+5.742931165,LastTimestamp:2026-01-30 00:11:23.344673999 +0000 UTC m=+46.456209899,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.503172 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59bc05042afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59bc05042afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:42.652777212 +0000 UTC m=+5.764313142,LastTimestamp:2026-01-30 00:11:23.494899373 +0000 UTC m=+46.606435273,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:23 crc kubenswrapper[5117]: E0130 00:11:23.602997 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 30 00:11:23 crc kubenswrapper[5117]: I0130 00:11:23.871170 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.099438 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.101187 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a"}
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.101430 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.101996 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.102044 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.102058 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:24 crc kubenswrapper[5117]: E0130 00:11:24.102416 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:24 crc kubenswrapper[5117]: E0130 00:11:24.784966 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 30 00:11:24 crc kubenswrapper[5117]: I0130 00:11:24.870406 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.106525 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.107498 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.109756 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a" exitCode=255
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.109814 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a"}
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.109883 5117 scope.go:117] "RemoveContainer" containerID="aa5466b64cfc2e073d21b21b94d078972553880c47f7b3b5494ecc91b631322a"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.110243 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.111085 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.111113 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.111122 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:25 crc kubenswrapper[5117]: E0130 00:11:25.111416 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.111645 5117 scope.go:117] "RemoveContainer" containerID="163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a"
Jan 30 00:11:25 crc kubenswrapper[5117]: E0130 00:11:25.111936 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:25 crc kubenswrapper[5117]: E0130 00:11:25.119386 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59c0c3626b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:25.111911312 +0000 UTC m=+48.223447202,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
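
[Annotation] The "back-off 20s restarting failed container" messages above come from the kubelet's per-container crash backoff, which doubles the restart delay from a 10s base up to a 5-minute cap; 20s therefore corresponds to the second consecutive crash. A standalone sketch of that schedule (illustrative arithmetic only, not the kubelet's actual implementation; base and cap are the commonly documented defaults):

// backoff_sketch.go - illustrative: reproduces a kubelet-style doubling
// restart delay, matching the "back-off 20s" seen in the log above.
package main

import (
	"fmt"
	"time"
)

// restartDelay returns the capped, doubling delay after n consecutive crashes.
func restartDelay(n int, base, limit time.Duration) time.Duration {
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("crash %d -> back-off %s\n", n, restartDelay(n, 10*time.Second, 5*time.Minute))
	}
}
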
Jan 30 00:11:25 crc kubenswrapper[5117]: I0130 00:11:25.871329 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.041606 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.041858 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.042884 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.042921 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.042933 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:26 crc kubenswrapper[5117]: E0130 00:11:26.043271 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.115519 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.687462 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.688920 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.689004 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.689032 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.689078 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:26 crc kubenswrapper[5117]: E0130 00:11:26.706029 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:26 crc kubenswrapper[5117]: I0130 00:11:26.873361 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:27 crc kubenswrapper[5117]: E0130 00:11:27.095007 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 30 00:11:27 crc kubenswrapper[5117]: I0130 00:11:27.872157 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:28 crc kubenswrapper[5117]: I0130 00:11:28.871794 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:28 crc kubenswrapper[5117]: E0130 00:11:28.871843 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 30 00:11:29 crc kubenswrapper[5117]: E0130 00:11:29.073094 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:29 crc kubenswrapper[5117]: E0130 00:11:29.575181 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:29 crc kubenswrapper[5117]: I0130 00:11:29.872515 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:30 crc kubenswrapper[5117]: I0130 00:11:30.873485 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.073444 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.073907 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.075241 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.075312 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.075334 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:31 crc kubenswrapper[5117]: E0130 00:11:31.075956 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.076786 5117 scope.go:117] "RemoveContainer" containerID="163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a"
Jan 30 00:11:31 crc kubenswrapper[5117]: E0130 00:11:31.077261 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:31 crc kubenswrapper[5117]: E0130 00:11:31.086569 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59c0c3626b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:31.077188714 +0000 UTC m=+54.188724614,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:31 crc kubenswrapper[5117]: I0130 00:11:31.873360 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:32 crc kubenswrapper[5117]: I0130 00:11:32.872048 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:33 crc kubenswrapper[5117]: I0130 00:11:33.707153 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:33 crc kubenswrapper[5117]: I0130 00:11:33.708890 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:33 crc kubenswrapper[5117]: I0130 00:11:33.708955 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:33 crc kubenswrapper[5117]: I0130 00:11:33.708967 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:33 crc kubenswrapper[5117]: I0130 00:11:33.708996 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:33 crc kubenswrapper[5117]: E0130 00:11:33.722837 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:33 crc kubenswrapper[5117]: I0130 00:11:33.867304 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.101881 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.102235 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.103564 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.103626 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.103646 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:34 crc kubenswrapper[5117]: E0130 00:11:34.104361 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.104790 5117 scope.go:117] "RemoveContainer" containerID="163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a"
Jan 30 00:11:34 crc kubenswrapper[5117]: E0130 00:11:34.105051 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:11:34 crc kubenswrapper[5117]: E0130 00:11:34.111422 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59c0c3626b73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59c0c3626b73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:11:03.026494323 +0000 UTC m=+26.138030223,LastTimestamp:2026-01-30 00:11:34.105012875 +0000 UTC m=+57.216548785,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 00:11:34 crc kubenswrapper[5117]: I0130 00:11:34.871539 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:35 crc kubenswrapper[5117]: I0130 00:11:35.872371 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:36 crc kubenswrapper[5117]: E0130 00:11:36.583011 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:36 crc kubenswrapper[5117]: I0130 00:11:36.870859 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:37 crc kubenswrapper[5117]: I0130 00:11:37.870155 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:38 crc kubenswrapper[5117]: I0130 00:11:38.873206 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:39 crc kubenswrapper[5117]: E0130 00:11:39.073806 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:11:39 crc kubenswrapper[5117]: I0130 00:11:39.873183 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:40 crc kubenswrapper[5117]: I0130 00:11:40.723861 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:40 crc kubenswrapper[5117]: I0130 00:11:40.725734 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:40 crc kubenswrapper[5117]: I0130 00:11:40.725952 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:40 crc kubenswrapper[5117]: I0130 00:11:40.726105 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:40 crc kubenswrapper[5117]: I0130 00:11:40.726265 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:40 crc kubenswrapper[5117]: E0130 00:11:40.737378 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 30 00:11:40 crc kubenswrapper[5117]: I0130 00:11:40.873025 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:41 crc kubenswrapper[5117]: I0130 00:11:41.870420 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:42 crc kubenswrapper[5117]: I0130 00:11:42.869758 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:43 crc kubenswrapper[5117]: E0130 00:11:43.591071 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 30 00:11:43 crc kubenswrapper[5117]: I0130 00:11:43.870684 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:44 crc kubenswrapper[5117]: I0130 00:11:44.867811 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 30 00:11:45 crc kubenswrapper[5117]: I0130 00:11:45.839078 5117 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-trwqm"
Jan 30 00:11:45 crc kubenswrapper[5117]: I0130 00:11:45.847892 5117 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-trwqm"
Jan 30 00:11:45 crc kubenswrapper[5117]: I0130 00:11:45.933617 5117 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.036396 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.037615 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.037672 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.037703 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:46 crc kubenswrapper[5117]: E0130 00:11:46.038285 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.038622 5117 scope.go:117] "RemoveContainer" containerID="163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.490380 5117 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.849159 5117 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-01 00:06:45 +0000 UTC" deadline="2026-02-23 08:56:09.356300811 +0000 UTC"
Jan 30 00:11:46 crc kubenswrapper[5117]: I0130 00:11:46.849484 5117 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="584h44m22.506822055s"
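
[Annotation] The anonymous-access failures stop here: the kubelet's bootstrap CSR csr-trwqm is approved and issued at 00:11:45, and the client rotates onto the new certificate at 00:11:46. A minimal client-go sketch of that approval step (illustrative only; in this cluster the approval is done by a controller, not by this code, and the kubeconfig source is an assumption):

// approvecsr.go - illustrative: fetches a pending kubelet client CSR and
// marks it Approved, the state transition logged above for csr-trwqm.
package main

import (
	"context"
	"fmt"
	"os"

	certv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at credentials allowed to approve CSRs.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), "csr-trwqm", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	csr.Status.Conditions = append(csr.Status.Conditions, certv1.CertificateSigningRequestCondition{
		Type:    certv1.CertificateApproved,
		Status:  corev1.ConditionTrue,
		Reason:  "ManualApproval",
		Message: "approved for illustration",
	})
	// UpdateApproval writes the approval condition via the /approval subresource.
	if _, err := cs.CertificatesV1().CertificateSigningRequests().UpdateApproval(context.TODO(), csr.Name, csr, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("approved", csr.Name)
}
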
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.174674 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.177085 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948"}
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.177288 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.177884 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.177936 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.177953 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.178557 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.737986 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.739130 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.739193 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.739216 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.739409 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.754119 5117 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.754567 5117 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.754613 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.759637 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.759723 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.759743 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.759773 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.759793 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.779863 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c59efde3-3a5f-43f0-8174-2d1f7716f844\\\",\\\"systemUUID\\\":\\\"25eba5d0-e5a8-4791-9aa1-0b4d29f1cacf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
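
[Annotation] The node status patch above is built and sent correctly; it is rejected because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 is not listening yet ("connection refused"). A tiny reachability probe for that endpoint (illustrative, not part of any shipped tooling):

// webhookprobe.go - illustrative: checks whether anything is accepting TCP
// connections on the webhook port the status patch above failed to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		// Matches the "dial tcp 127.0.0.1:9743: connect: connection refused" above.
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TCP connections")
}
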
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.787436 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.787504 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.787525 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.787552 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.787574 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.802955 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c59efde3-3a5f-43f0-8174-2d1f7716f844\\\",\\\"systemUUID\\\":\\\"25eba5d0-e5a8-4791-9aa1-0b4d29f1cacf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
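
[Annotation] Throughout this window the Ready condition stays False with reason KubeletNotReady, because no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/ (the directory named in the condition message). A minimal check for the state the kubelet is waiting on (illustrative only):

// cnicheck.go - illustrative: looks for a CNI config in the directory cited by
// the NetworkReady=false condition above; the node stays NotReady until the
// network plugin writes a file here.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	matches, err := filepath.Glob("/etc/kubernetes/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	if len(matches) == 0 {
		fmt.Println("no CNI configuration files yet; node will report NotReady")
		return
	}
	fmt.Println("found CNI config:", matches)
}
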
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.813330 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.813345 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.826488 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c59efde3-3a5f-43f0-8174-2d1f7716f844\\\",\\\"systemUUID\\\":\\\"25eba5d0-e5a8-4791-9aa1-0b4d29f1cacf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.842926 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.842993 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.843010 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.843032 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5117]: I0130 00:11:47.843047 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.855223 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c59efde3-3a5f-43f0-8174-2d1f7716f844\\\",\\\"systemUUID\\\":\\\"25eba5d0-e5a8-4791-9aa1-0b4d29f1cacf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.855523 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.855591 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:47 crc kubenswrapper[5117]: E0130 00:11:47.956497 5117 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.057078 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.157406 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.181441 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.181986 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.183962 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" exitCode=255 Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.184047 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948"} Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.184135 5117 scope.go:117] "RemoveContainer" containerID="163b13c88fc5a21068a999d42cd4e4e03f61a441866cc73d9e479a9985e8812a" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.184490 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.185451 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.185486 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.185497 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.186097 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:48 crc kubenswrapper[5117]: I0130 00:11:48.186442 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.186729 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.257808 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.358032 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 
00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.458352 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.559281 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.660226 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.761223 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.862000 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:48 crc kubenswrapper[5117]: E0130 00:11:48.962569 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.063483 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.074849 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.164264 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: I0130 00:11:49.196363 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.265340 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.366487 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.467358 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.567779 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.668732 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.769865 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.870288 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:49 crc kubenswrapper[5117]: E0130 00:11:49.971409 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.072570 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.173196 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.273682 
5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.374332 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.475487 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.575778 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.676725 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.776817 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.877240 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:50 crc kubenswrapper[5117]: E0130 00:11:50.978297 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: I0130 00:11:51.073094 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:51 crc kubenswrapper[5117]: I0130 00:11:51.073449 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:51 crc kubenswrapper[5117]: I0130 00:11:51.074751 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:51 crc kubenswrapper[5117]: I0130 00:11:51.074864 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:51 crc kubenswrapper[5117]: I0130 00:11:51.074894 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.075727 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:51 crc kubenswrapper[5117]: I0130 00:11:51.076199 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.076590 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.078756 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.179662 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.279989 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc 
kubenswrapper[5117]: E0130 00:11:51.380351 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.481376 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.581746 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.682806 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.783848 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.884817 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:51 crc kubenswrapper[5117]: E0130 00:11:51.985918 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.086728 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.187770 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.311189 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.412316 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.513122 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.613565 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.713911 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.814898 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:52 crc kubenswrapper[5117]: E0130 00:11:52.915932 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.016751 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.116852 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.217382 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.317527 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.418275 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 
30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.518999 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.619830 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.720887 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.822050 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:53 crc kubenswrapper[5117]: E0130 00:11:53.924899 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.025992 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.126802 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.227861 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.329000 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.429365 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.529906 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.630685 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.731289 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.831951 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:54 crc kubenswrapper[5117]: E0130 00:11:54.932862 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.033424 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.134648 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.235073 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.335888 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.436200 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.536957 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.637624 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.737829 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.838820 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:55 crc kubenswrapper[5117]: E0130 00:11:55.939871 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.040054 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.140284 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.240415 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.341250 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.441978 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.542388 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.642568 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.743775 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.844567 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:56 crc kubenswrapper[5117]: E0130 00:11:56.945190 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.045581 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.146229 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: I0130 00:11:57.177893 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:57 crc kubenswrapper[5117]: I0130 00:11:57.178299 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:57 crc kubenswrapper[5117]: I0130 00:11:57.179855 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:57 crc kubenswrapper[5117]: I0130 00:11:57.179915 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:57 crc kubenswrapper[5117]: I0130 00:11:57.179941 5117 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.180851 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:57 crc kubenswrapper[5117]: I0130 00:11:57.181276 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.181713 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.246971 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.347727 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.447932 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.548092 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.649318 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.750781 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.851363 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:57 crc kubenswrapper[5117]: E0130 00:11:57.952614 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:58 crc kubenswrapper[5117]: E0130 00:11:58.053059 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:58 crc kubenswrapper[5117]: E0130 00:11:58.061413 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.066877 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.066945 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.066965 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.066994 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.067015 5117 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:58 crc kubenswrapper[5117]: E0130 00:11:58.082815 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c59efde3-3a5f-43f0-8174-2d1f7716f844\\\",\\\"systemUUID\\\":\\\"25eba5d0-e5a8-4791-9aa1-0b4d29f1cacf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.095680 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.095786 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.095815 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.095847 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:11:58 crc kubenswrapper[5117]: I0130 00:11:58.095870 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:58Z","lastTransitionTime":"2026-01-30T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
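The sequence above is the kubelet's node-status retry loop: each PATCH of the node's status is rejected at admission because the node.network-node-identity.openshift.io webhook endpoint at 127.0.0.1:9743 is not listening yet, and after a fixed number of attempts the kubelet gives up with "update node status exceeds retry count" (next entry). The identical "will retry" error was logged on each attempt (00:11:58.113675, .140339, .171091) and is shown once above. A minimal Go sketch of that pattern, assuming the upstream constant name nodeStatusUpdateRetry; this is an illustration, not kubelet's actual code:

```go
// Minimal sketch (not kubelet's real implementation) of the retry pattern
// behind "Error updating node status, will retry" and the terminal
// "update node status exceeds retry count" message.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // same constant name as upstream kubelet

func tryPatchNodeStatus() error {
	// Stand-in for the PATCH against the API server; here it always fails
	// the way the log shows: the admission webhook endpoint refuses connections.
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": connection refused`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}
```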
Jan 30 00:11:58 crc kubenswrapper[5117]: E0130 00:11:58.171302 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 00:11:58 crc kubenswrapper[5117]: E0130 00:11:58.171342 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:11:59 crc kubenswrapper[5117]: E0130 00:11:59.075209 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 00:12:02 crc kubenswrapper[5117]: E0130 00:12:02.097101 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
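The "Error getting the current node from lister" entry repeated at roughly 100 ms intervals from 00:11:58.171342 through 00:12:03.205124 (first and last occurrences kept here, plus the distinct eviction-manager entry): the kubelet is looking up its own Node object in an informer-backed lister before the reflector has synced it, and the errors stop once the "Caches populated" type="*v1.Node" entry appears below. A self-contained sketch that reproduces the exact error string with a fake clientset (requires the k8s.io/client-go module; names here are illustrative):

```go
// Reproduces the lister lookup that fails until the Node informer cache
// holds the node object, matching the log's `node "crc" not found`.
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	// A fake clientset with no Node objects: the lister then behaves like
	// the kubelet's cache before the reflector has synced anything.
	client := fake.NewSimpleClientset()
	factory := informers.NewSharedInformerFactory(client, 0)
	nodeLister := factory.Core().V1().Nodes().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Prints the same NotFound error the kubelet logs every sync tick.
	if _, err := nodeLister.Get("crc"); err != nil {
		fmt.Println("Error getting the current node from lister:", err)
	}
}
```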
Jan 30 00:12:02 crc kubenswrapper[5117]: I0130 00:12:02.197373 5117 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 30 00:12:03 crc kubenswrapper[5117]: E0130 00:12:03.205124 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.214395 5117 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.219045 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.236169 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.307219 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.307291 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.307305 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.307326 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.307339 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
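With the Node informer cache populated, the kubelet begins creating mirror pods: API-server-visible counterparts of the static pods it runs from on-disk manifests (here the CRC control plane: kube-scheduler, etcd, kube-apiserver, kube-controller-manager, plus kube-rbac-proxy-crio). A mirror pod is identified by the kubernetes.io/config.mirror annotation; the helper below is a simplified check, not the kubelet's internal one:

```go
// Simplified sketch of how a mirror pod is recognized. The annotation key
// kubernetes.io/config.mirror is the real marker kubelet puts on the
// API-side copy of a static pod; the pod name and hash value are examples.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const mirrorAnnotation = "kubernetes.io/config.mirror"

func isMirrorPod(p *v1.Pod) bool {
	_, ok := p.Annotations[mirrorAnnotation]
	return ok
}

func main() {
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name:        "kube-apiserver-crc",
		Namespace:   "openshift-kube-apiserver",
		Annotations: map[string]string{mirrorAnnotation: "example-static-pod-hash"},
	}}
	fmt.Println(isMirrorPod(pod)) // true
}
```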
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.307339 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.335342 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.410097 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.410204 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.410242 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.410284 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.410308 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.435109 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.513111 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.513180 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.513194 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.513214 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.513229 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.536975 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.615606 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.615674 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.615721 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.615750 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.615766 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.718626 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.718735 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.718757 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.718785 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.718806 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.820980 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.821046 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.821064 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.821088 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.821105 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.924450 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.924513 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.924531 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.924558 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:03 crc kubenswrapper[5117]: I0130 00:12:03.924575 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:03Z","lastTransitionTime":"2026-01-30T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.027229 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.027281 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.027296 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.027315 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.027327 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.035495 5117 apiserver.go:52] "Watching apiserver"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.054875 5117 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.056000 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg","openshift-image-registry/node-ca-5m2xx","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-cdnjt","openshift-multus/multus-sdjgw","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-dns/node-resolver-drphl","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/multus-additional-cni-plugins-cgq54","openshift-etcd/etcd-crc","openshift-machine-config-operator/machine-config-daemon-z8qm4","openshift-multus/network-metrics-daemon-q7tcw"]
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.057649 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.060294 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.060392 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.060600 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.061059 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.061245 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.062200 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.062942 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.065209 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.065435 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.065732 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.067222 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.067547 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.067592 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.067932 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.068036 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.068151 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.068191 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5m2xx"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.069750 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.070583 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.070783 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.075048 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.076870 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.090059 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.105750 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.123555 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.129683 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.129778 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.129795 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.129818 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.129834 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.137048 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.149450 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.160886 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.160972 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ff05938-ab46-4a8d-ba5d-d583eac37163-host\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161036 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3965caad-c581-45b3-88e0-99b4039659c5-rootfs\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161066 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161089 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161107 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ff05938-ab46-4a8d-ba5d-d583eac37163-serviceca\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161124 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb62d\" (UniqueName: \"kubernetes.io/projected/6ff05938-ab46-4a8d-ba5d-d583eac37163-kube-api-access-lb62d\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161145 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55tjv\" (UniqueName: \"kubernetes.io/projected/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-kube-api-access-55tjv\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl"
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.161286 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161377 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7tj8\" (UniqueName: \"kubernetes.io/projected/3965caad-c581-45b3-88e0-99b4039659c5-kube-api-access-r7tj8\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161437 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.161528 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.661433652 +0000 UTC m=+87.772969582 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161633 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161743 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161821 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3965caad-c581-45b3-88e0-99b4039659c5-mcd-auth-proxy-config\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.161881 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-tmp-dir\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl"
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.161889 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.161982 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.661958857 +0000 UTC m=+87.773494747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.162033 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3965caad-c581-45b3-88e0-99b4039659c5-proxy-tls\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.164477 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.164723 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.165574 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.165624 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-hosts-file\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.165675 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.165741 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
\"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.165814 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.168582 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.170242 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.185750 5117 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.185834 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.185940 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.186824 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.190110 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.190223 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.190679 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.191572 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.196443 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.196986 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.197472 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197529 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197571 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197579 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197599 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197602 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197612 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197736 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.697712436 +0000 UTC m=+87.809248336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.197798 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.697770037 +0000 UTC m=+87.809306027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.199205 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.201099 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.205942 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.212256 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.222576 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.233286 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.233634 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.236956 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.237030 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.237048 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.237077 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.237096 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.238506 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.238654 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239055 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239039 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239120 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239413 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239630 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239667 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.239681 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.240085 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.240141 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.240202 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.240284 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.250189 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.260357 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-drphl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55tjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-drphl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269206 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7tj8\" (UniqueName: \"kubernetes.io/projected/3965caad-c581-45b3-88e0-99b4039659c5-kube-api-access-r7tj8\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269279 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-ovn-kubernetes\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269314 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269349 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3965caad-c581-45b3-88e0-99b4039659c5-proxy-tls\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269375 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-log-socket\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269398 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-script-lib\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269440 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ff05938-ab46-4a8d-ba5d-d583eac37163-serviceca\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269467 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lb62d\" (UniqueName: \"kubernetes.io/projected/6ff05938-ab46-4a8d-ba5d-d583eac37163-kube-api-access-lb62d\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269490 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55tjv\" (UniqueName: \"kubernetes.io/projected/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-kube-api-access-55tjv\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269536 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269570 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-kubelet\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269593 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-bin\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269621 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3965caad-c581-45b3-88e0-99b4039659c5-rootfs\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269642 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-slash\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 
00:12:04.269678 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-netns\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269716 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-node-log\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269738 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvmf\" (UniqueName: \"kubernetes.io/projected/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-kube-api-access-rpvmf\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269765 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-var-lib-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269799 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3965caad-c581-45b3-88e0-99b4039659c5-mcd-auth-proxy-config\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269826 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-tmp-dir\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269852 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-systemd\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269876 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-config\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269903 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 
30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269929 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-hosts-file\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.269953 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-env-overrides\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270145 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-hosts-file\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270288 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3965caad-c581-45b3-88e0-99b4039659c5-rootfs\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270319 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-etc-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270375 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270429 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270515 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270646 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ff05938-ab46-4a8d-ba5d-d583eac37163-host\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270769 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-ovn\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270852 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ff05938-ab46-4a8d-ba5d-d583eac37163-host\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270952 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-netd\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270990 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-systemd-units\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.270957 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-tmp-dir\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.271028 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovn-node-metrics-cert\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.272960 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3965caad-c581-45b3-88e0-99b4039659c5-mcd-auth-proxy-config\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.274313 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ff05938-ab46-4a8d-ba5d-d583eac37163-serviceca\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.278518 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3965caad-c581-45b3-88e0-99b4039659c5-proxy-tls\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.280866 5117 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.287806 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55tjv\" (UniqueName: \"kubernetes.io/projected/4ee02588-f6ac-4300-9cbb-17e3a0b80e4a-kube-api-access-55tjv\") pod \"node-resolver-drphl\" (UID: \"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\") " pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.289117 5117 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.290489 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7tj8\" (UniqueName: \"kubernetes.io/projected/3965caad-c581-45b3-88e0-99b4039659c5-kube-api-access-r7tj8\") pod \"machine-config-daemon-z8qm4\" (UID: \"3965caad-c581-45b3-88e0-99b4039659c5\") " pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.292299 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb62d\" (UniqueName: \"kubernetes.io/projected/6ff05938-ab46-4a8d-ba5d-d583eac37163-kube-api-access-lb62d\") pod \"node-ca-5m2xx\" (UID: \"6ff05938-ab46-4a8d-ba5d-d583eac37163\") " pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.295776 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.299006 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.301232 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.304295 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.304595 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.304889 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.305183 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.312805 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.314222 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.314613 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.316141 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.316635 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.317917 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.318008 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.318120 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.325258 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.326744 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.327192 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.327548 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.328948 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.339015 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20183f5d-15d9-4a2e-afab-ba81d49aae6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6f145d8fb662efd4297227d05be0be66559525a069a56f8766ddf99188e96072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\
",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.339254 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.339488 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.339504 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.339523 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.339538 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.352030 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa751395-67ab-4cce-8dbb-9f2ba6c32b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://593e17d2b7b52cdae7ea597a23e84ff0bf2aa60c375f9aca06dcd08c9e3f62e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b224c5fd1d4850a504ea24d2a7a69f9bc69c770196bb142ca72970d03830cb31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c69ec53206c2bd047ddabdee78ed4f580ff7c5dab223808d8d5f78ea3efadbd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.364541 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371382 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-netns\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371424 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-node-log\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371449 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rpvmf\" (UniqueName: \"kubernetes.io/projected/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-kube-api-access-rpvmf\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371479 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-cnibin\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371559 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-netns\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371577 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-node-log\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371611 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-cni-bin\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371723 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-var-lib-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371751 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef32555a-37d0-4ff7-80d6-3d572916786f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371786 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371830 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-systemd\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371856 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-config\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371881 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-cni-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371907 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hklp5\" (UniqueName: \"kubernetes.io/projected/ef32555a-37d0-4ff7-80d6-3d572916786f-kube-api-access-hklp5\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371935 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cnibin\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371985 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-systemd\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372018 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-conf-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372045 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-env-overrides\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372073 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-etc-kubernetes\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372125 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-etc-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372158 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-cni-multus\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372193 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-multus-certs\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372226 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-ovn\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372251 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-socket-dir-parent\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372274 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-netns\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372320 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-netd\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372343 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cni-binary-copy\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372365 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrfpb\" (UniqueName: \"kubernetes.io/projected/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-kube-api-access-mrfpb\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372392 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372420 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-systemd-units\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 
00:12:04.372444 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovn-node-metrics-cert\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372468 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-os-release\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.372495 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.371987 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-var-lib-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373029 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-netd\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373086 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-ovn\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373384 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-systemd-units\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373720 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-ovn-kubernetes\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373804 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-config\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373827 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373861 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knhxg\" (UniqueName: \"kubernetes.io/projected/a09afae3-bd41-4f19-af49-34689367f229-kube-api-access-knhxg\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373920 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373918 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-env-overrides\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373934 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-ovn-kubernetes\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373970 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.373988 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-hostroot\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374014 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374030 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-log-socket\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc 
kubenswrapper[5117]: I0130 00:12:04.374077 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-script-lib\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374114 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-system-cni-dir\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374170 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-log-socket\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374231 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-os-release\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374261 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-k8s-cni-cncf-io\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374306 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0ccdffb-2e23-428a-8423-b08f9d708b15-cni-binary-copy\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374375 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374416 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374445 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: 
\"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374478 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-kubelet\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374502 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-bin\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374528 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-kubelet\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374557 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-slash\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374566 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-kubelet\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374581 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-system-cni-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374565 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-sdjgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0ccdffb-2e23-428a-8423-b08f9d708b15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rprhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sdjgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374613 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-daemon-config\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374623 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-bin\") pod 
\"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374639 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rprhg\" (UniqueName: \"kubernetes.io/projected/c0ccdffb-2e23-428a-8423-b08f9d708b15-kube-api-access-rprhg\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.374701 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-slash\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.375006 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-script-lib\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.375431 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-etc-openvswitch\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.377562 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovn-node-metrics-cert\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.385013 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.389262 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpvmf\" (UniqueName: \"kubernetes.io/projected/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-kube-api-access-rpvmf\") pod \"ovnkube-node-cdnjt\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.389593 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3965caad-c581-45b3-88e0-99b4039659c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z8qm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.394945 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.399896 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7401198e-bb3b-4751-8aa5-cd73dd7f11b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7316893852c737b0d9ba4d82f95e30368750d3de645e594c803519f4536f5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c3ab95093b37cc80e5bd368dd2136ddd5b4f4f24601b417cc1a9d1105b99471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://
4df65a3ddf5bacacb01f75935c3483e4e65c115d77a32405d17da0426f4989e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: W0130 00:12:04.412032 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-50629541c6ec6e98df927d794654d05d1dc0cfb0f80a4431879af4b0bc58581a WatchSource:0}: Error finding container 50629541c6ec6e98df927d794654d05d1dc0cfb0f80a4431879af4b0bc58581a: Status 404 returned error can't find the container with id 50629541c6ec6e98df927d794654d05d1dc0cfb0f80a4431879af4b0bc58581a Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.412151 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.421470 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09afae3-bd41-4f19-af49-34689367f229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q7tcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.424154 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.424179 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5m2xx" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.424802 5117 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.442480 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.442631 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.442729 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.442819 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.442884 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.442608 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: W0130 00:12:04.447867 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-1edbb71af15ea653db183aeb60c6db8cbaffd56cb323de349a40aa3de84632c2 WatchSource:0}: Error finding container 1edbb71af15ea653db183aeb60c6db8cbaffd56cb323de349a40aa3de84632c2: Status 404 returned error can't find the container with id 1edbb71af15ea653db183aeb60c6db8cbaffd56cb323de349a40aa3de84632c2 Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.454565 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.473469 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cgq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.475991 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 
00:12:04.476073 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476110 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476268 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476301 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476334 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476365 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476397 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476427 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476476 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476509 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.476588 
5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.477490 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.477776 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478403 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478592 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478654 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478718 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478754 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478789 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478824 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478866 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478902 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478936 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478970 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479010 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479043 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479079 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479112 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479156 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479190 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479223 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479260 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479293 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479334 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479369 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479416 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479458 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479497 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478599 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.478758 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479474 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.479374 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480122 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480218 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480460 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480606 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480621 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480814 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480863 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480893 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480903 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.480921 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481087 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481089 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481151 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481191 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481259 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481291 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481319 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481349 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481374 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481409 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481436 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: 
"567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481515 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481543 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481567 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481592 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481617 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481641 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481664 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481713 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481717 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481741 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481770 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481798 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481822 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481851 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481876 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481777 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.481902 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482118 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482438 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482617 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482796 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482989 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482832 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.482657 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483031 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483067 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483112 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483140 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483164 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483212 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483238 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483346 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483364 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483400 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483446 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483465 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483486 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483586 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483609 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483629 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483671 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483739 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483883 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483908 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484023 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484048 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484069 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484108 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484131 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484231 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484254 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484275 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484632 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484657 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484675 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484821 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484841 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484862 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484979 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485000 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485018 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485137 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485161 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485178 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485213 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483499 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483578 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485474 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483665 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483778 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.483806 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484162 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484303 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.484488 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485197 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485433 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485502 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485535 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485721 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.485962 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486008 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486054 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486096 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486145 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486183 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486222 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486271 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486308 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486349 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486389 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486452 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486492 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486538 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486575 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486613 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486653 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486712 5117 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486755 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486794 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486834 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486871 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486906 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486949 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.486989 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487037 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487074 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:12:04 crc 
kubenswrapper[5117]: I0130 00:12:04.487112 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487150 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487192 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487240 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487279 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487316 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487352 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487388 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487429 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487477 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:12:04 crc 
kubenswrapper[5117]: I0130 00:12:04.487515 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487552 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487594 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487633 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487672 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487728 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487765 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487828 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487866 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487912 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: 
\"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487952 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487989 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488050 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488089 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488124 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488160 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488200 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488240 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488285 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488324 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: 
\"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488369 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488411 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488451 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488490 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488530 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488570 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488616 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488673 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488737 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488788 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488831 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488872 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488915 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488951 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489170 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489208 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489243 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489284 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489320 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489357 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489398 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489440 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489479 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489520 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489560 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489598 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489638 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489754 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489804 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489848 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: 
\"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489895 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489935 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.489972 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.490015 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.490065 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.490109 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.490154 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487072 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.487242 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488094 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488612 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488640 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.488624 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.490070 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.496785 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.490313 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.990252832 +0000 UTC m=+88.101788722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.490990 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.491288 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.491738 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.491940 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.493058 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.493441 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.493474 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.493545 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.493648 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.495297 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.495594 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.496096 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.496322 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.496367 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.496496 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497131 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497167 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497209 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497238 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497270 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497275 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497296 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497325 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497357 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497383 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497419 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497424 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497449 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497479 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497506 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497537 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497578 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497604 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497632 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497659 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497891 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497921 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497991 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498017 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498043 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498066 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498131 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498267 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rprhg\" (UniqueName: \"kubernetes.io/projected/c0ccdffb-2e23-428a-8423-b08f9d708b15-kube-api-access-rprhg\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498347 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-cnibin\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498390 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-cni-bin\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498446 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef32555a-37d0-4ff7-80d6-3d572916786f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498471 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498532 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-cni-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498578 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hklp5\" (UniqueName: \"kubernetes.io/projected/ef32555a-37d0-4ff7-80d6-3d572916786f-kube-api-access-hklp5\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498601 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cnibin\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498622 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-conf-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498668 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-etc-kubernetes\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498732 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-cni-multus\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498756 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-multus-certs\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498818 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-socket-dir-parent\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498840 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-netns\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498894 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cni-binary-copy\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498917 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrfpb\" (UniqueName: \"kubernetes.io/projected/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-kube-api-access-mrfpb\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498960 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499040 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-os-release\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499068 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499405 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499470 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knhxg\" (UniqueName: \"kubernetes.io/projected/a09afae3-bd41-4f19-af49-34689367f229-kube-api-access-knhxg\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499521 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499552 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-hostroot\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499615 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-system-cni-dir\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499637 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-os-release\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499673 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-k8s-cni-cncf-io\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499735 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0ccdffb-2e23-428a-8423-b08f9d708b15-cni-binary-copy\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499797 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499830 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-kubelet\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499889 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-system-cni-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499910 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-daemon-config\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500035 5117 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500049 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500062 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500072 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500101 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500115 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500125 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500136 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500146 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500158 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500189 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500200 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500211 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500222 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500233 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500261 5117 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500275 5117 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500285 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500295 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500305 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500316 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500345 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500357 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500367 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500377 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500387 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500396 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500425 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500438 5117 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500449 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500458 5117 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500468 5117 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500477 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500507 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500518 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500536 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500546 5117 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500557 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500588 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500598 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500610 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500620 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500629 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500640 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500668 5117 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500678 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500714 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500726 5117 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500736 5117 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500746 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500756 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500766 5117 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500796 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500806 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500817 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500827 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500836 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500865 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500877 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500887 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500896 5117 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500908 5117 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500919 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500948 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.502134 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-daemon-config\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw"
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497834 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.497847 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498079 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498500 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498568 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498848 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.498949 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499068 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499454 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499820 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499935 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499948 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.499980 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500242 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500496 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500548 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500570 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.500830 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.501720 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.502731 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.502932 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.505908 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.505953 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.506205 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.506215 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.506544 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.506803 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507153 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507163 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507348 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507424 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507508 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507699 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507756 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.507890 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508080 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508353 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508489 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508506 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508682 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508894 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.508932 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509076 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509120 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509242 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509477 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509507 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509775 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509808 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509821 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509828 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.509946 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510048 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510197 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510084 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58962898-db76-4092-9fd2-6ee041453295\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://821751418b1c5520e37391e8725d8ce1d3b5e1a6c4904587df7e9523af49ec05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec9c97c3cc2d8213a5562ed88f952b05cf8c3d680a573498ad7b11259cf9a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://465448a262b54efe8e7d250fdbc015c4980c5fe972cce80cc5b93ac3b5fbb74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ca5cc06dc5d68e32e4afff843811d1c9a18c194cd728caf0b991d8afe748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb9268a4c90e72b2cc87518edaf2e2d38186097e11994c07eef72b31deaf5f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510257 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.510574 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510715 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510756 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-cnibin\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.510798 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-cni-bin\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.511173 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.511303 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.511555 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.511704 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512065 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512482 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512572 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512682 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512814 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-os-release\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512877 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512881 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-cni-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.512930 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-conf-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.513358 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.513495 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.513684 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.513998 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.514072 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.514287 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.514326 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-hostroot\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.514521 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.514907 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0ccdffb-2e23-428a-8423-b08f9d708b15-cni-binary-copy\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.514984 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.515131 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.515229 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.515257 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.516247 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs podName:a09afae3-bd41-4f19-af49-34689367f229 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.010663929 +0000 UTC m=+88.122199829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs") pod "network-metrics-daemon-q7tcw" (UID: "a09afae3-bd41-4f19-af49-34689367f229") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.516313 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-system-cni-dir\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.517213 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.517472 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.517782 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.520472 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.520669 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.520784 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.523170 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.523441 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.524053 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.524064 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.524113 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.524731 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.525020 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.525807 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.525957 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.526154 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.527136 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.527363 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528176 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cnibin\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528435 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528448 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528649 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528729 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528773 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528798 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528815 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528839 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528524 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-os-release\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528261 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-k8s-cni-cncf-io\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.528942 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-kubelet\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529139 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529271 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529329 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529338 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529461 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529597 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529617 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-multus-socket-dir-parent\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529651 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-etc-kubernetes\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529654 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529571 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529713 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-netns\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529722 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-run-multus-certs\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529714 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0ccdffb-2e23-428a-8423-b08f9d708b15-host-var-lib-cni-multus\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529790 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-system-cni-dir\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.529985 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.530112 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.530282 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.530348 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.530649 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-cni-binary-copy\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.530736 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.532180 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.533899 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.534120 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.534185 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrfpb\" (UniqueName: \"kubernetes.io/projected/b1924d1c-fa4c-4d24-8885-d545bbb1c47e-kube-api-access-mrfpb\") pod \"multus-additional-cni-plugins-cgq54\" (UID: \"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\") " pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.534239 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.534964 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hklp5\" (UniqueName: \"kubernetes.io/projected/ef32555a-37d0-4ff7-80d6-3d572916786f-kube-api-access-hklp5\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.539067 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"532769ff-9767-48cd-8c80-07c96da318f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:11:47.150117 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:47.150348 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:47.151748 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3419708539/tls.crt::/tmp/serving-cert-3419708539/tls.key\\\\\\\"\\\\nI0130 00:11:47.930472 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:47.932364 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:47.932382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:47.932413 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly 
requests\\\\\\\" limit=400\\\\nI0130 00:11:47.932420 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:47.936849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 00:11:47.936865 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 00:11:47.936880 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:47.936897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:47.936900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:47.936903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:47.939460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540301 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540347 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540533 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540570 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540587 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540705 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.540858 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.541171 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.541746 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-drphl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.541828 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.542004 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.542455 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef32555a-37d0-4ff7-80d6-3d572916786f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-vlmjg\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.543365 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knhxg\" (UniqueName: \"kubernetes.io/projected/a09afae3-bd41-4f19-af49-34689367f229-kube-api-access-knhxg\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.545242 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.545284 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.545299 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.545318 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.545334 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.546683 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.546906 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547139 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547207 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547429 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547505 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547542 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547610 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547617 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.547989 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rprhg\" (UniqueName: \"kubernetes.io/projected/c0ccdffb-2e23-428a-8423-b08f9d708b15-kube-api-access-rprhg\") pod \"multus-sdjgw\" (UID: \"c0ccdffb-2e23-428a-8423-b08f9d708b15\") " pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.548965 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.549653 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.550075 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.551024 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.551455 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.551755 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.551896 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.553948 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.554084 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.554949 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.561208 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.561263 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.562750 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.563023 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.563090 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.563191 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.563400 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.563425 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.564902 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.565574 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.565943 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.568410 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.568870 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.569360 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.574092 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.582236 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.589435 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.594032 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-drphl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55tjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-drphl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601547 5117 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601572 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601582 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601591 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601601 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601612 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601622 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601632 5117 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 
00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601641 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601651 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601659 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601669 5117 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601678 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601700 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601710 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601720 5117 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601730 5117 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601739 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601747 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601756 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601764 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: 
I0130 00:12:04.601773 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601782 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601791 5117 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601800 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601809 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601820 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601830 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601840 5117 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601850 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601859 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601868 5117 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601878 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601886 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 
00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601894 5117 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601903 5117 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601912 5117 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601922 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601930 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601938 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601947 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601955 5117 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601964 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601971 5117 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601979 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601989 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.601997 5117 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602005 5117 reconciler_common.go:299] "Volume detached 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602013 5117 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602024 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602034 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602046 5117 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602056 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602067 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602078 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602087 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602096 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602106 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602116 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602125 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602156 5117 reconciler_common.go:299] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602167 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602176 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602185 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602196 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602204 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602212 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602221 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602230 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602238 5117 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602247 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602255 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602263 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602274 5117 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602282 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602290 5117 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602299 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602308 5117 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602317 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602327 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602335 5117 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602343 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602351 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602366 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602375 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602383 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602392 5117 reconciler_common.go:299] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602402 5117 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602412 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602421 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602429 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602438 5117 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602448 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602456 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602464 5117 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602473 5117 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602480 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602489 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602497 5117 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602505 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602513 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602522 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602530 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602538 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602546 5117 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602558 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602566 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602574 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602583 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602591 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602599 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602607 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602615 5117 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") 
on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602625 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602634 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602642 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602651 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602658 5117 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602666 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602674 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602698 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602706 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602714 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602723 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602732 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602740 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node 
\"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602749 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602757 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602765 5117 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602774 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602784 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602795 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602805 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602814 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602822 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602830 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602839 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602849 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602859 5117 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 
00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602868 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602878 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602888 5117 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602895 5117 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602905 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602913 5117 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602922 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602930 5117 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602940 5117 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602948 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602958 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602968 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602977 5117 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602986 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.602995 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.603003 5117 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.605530 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.610803 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.611919 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.612093 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b06
85b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cdnjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: W0130 00:12:04.612247 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae50f46f_8c30_46ce_91a1_9e2ce73d4fe0.slice/crio-a79a67d4dd15044e0af5558f93d1a71d4610b083ab05d901ec4d411b19f5dea2 WatchSource:0}: Error finding container a79a67d4dd15044e0af5558f93d1a71d4610b083ab05d901ec4d411b19f5dea2: Status 404 returned error can't find the container with id a79a67d4dd15044e0af5558f93d1a71d4610b083ab05d901ec4d411b19f5dea2 Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.617133 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sdjgw" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.622762 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef32555a-37d0-4ff7-80d6-3d572916786f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vlmjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:04 crc kubenswrapper[5117]: W0130 00:12:04.635059 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ccdffb_2e23_428a_8423_b08f9d708b15.slice/crio-4153a34fc70c84e28901c6d9a8c825025a078ae491f5251ba9eeb4aa7c025514 WatchSource:0}: Error finding container 4153a34fc70c84e28901c6d9a8c825025a078ae491f5251ba9eeb4aa7c025514: Status 404 returned error can't find the container with id 4153a34fc70c84e28901c6d9a8c825025a078ae491f5251ba9eeb4aa7c025514 Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.636014 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cgq54" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.645203 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.647062 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.647127 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.647139 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.647158 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.647169 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: W0130 00:12:04.647760 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1924d1c_fa4c_4d24_8885_d545bbb1c47e.slice/crio-28d4a04fcc5775314c16e4e41d5f3d39ce7f3433f98b544e829e80514ae5736f WatchSource:0}: Error finding container 28d4a04fcc5775314c16e4e41d5f3d39ce7f3433f98b544e829e80514ae5736f: Status 404 returned error can't find the container with id 28d4a04fcc5775314c16e4e41d5f3d39ce7f3433f98b544e829e80514ae5736f Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704086 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704167 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704230 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704256 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704316 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704329 5117 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.704341 5117 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.704462 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.704538 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.70451559 +0000 UTC m=+88.816051480 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.704869 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.704890 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.704903 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.704940 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.704929201 +0000 UTC m=+88.816465091 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.705042 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.705085 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.705074466 +0000 UTC m=+88.816610366 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.705093 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.705137 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.705157 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: E0130 00:12:04.705255 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.70522048 +0000 UTC m=+88.816756420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.750367 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.750436 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.750461 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.750487 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.750499 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.852981 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.853048 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.853070 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.853096 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.853115 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.955954 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.956006 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.956025 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.956044 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:04 crc kubenswrapper[5117]: I0130 00:12:04.956056 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:04Z","lastTransitionTime":"2026-01-30T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.007542 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.007884 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.007825141 +0000 UTC m=+89.119361031 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.041069 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.041874 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.058292 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.058337 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.058348 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.058364 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.058374 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.061981 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.066514 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.074945 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.102676 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.104245 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.109177 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.109355 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.109430 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs podName:a09afae3-bd41-4f19-af49-34689367f229 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.109408507 +0000 UTC m=+89.220944397 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs") pod "network-metrics-daemon-q7tcw" (UID: "a09afae3-bd41-4f19-af49-34689367f229") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.160487 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.160525 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.160535 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.160549 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.160561 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.176680 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.177574 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.180354 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.183559 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.204182 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.205126 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.244112 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.244542 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c" exitCode=0 Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.247559 5117 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.262635 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.262838 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.263113 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.263298 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.263498 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.266358 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.267396 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.328609 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.348383 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.366259 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.366314 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.366331 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.366352 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.366371 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.439683 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.443213 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.449594 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.451229 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.454255 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.456575 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.458837 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.468474 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.468540 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.468557 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.468582 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.468599 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.474041 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.475207 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.480101 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.480643 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.484241 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.485523 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.510497 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.544964 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.546617 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.548789 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.552023 5117 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.552203 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.571243 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.571301 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.571314 5117 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.571329 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.571339 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.578918 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.605414 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.607825 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.611613 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.612196 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.613816 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.614497 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.615000 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.627819 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.630489 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.637727 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.639746 5117 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.655078 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.673640 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.673735 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.673761 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.673782 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.673795 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.677683 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.679389 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.682416 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.686488 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.688671 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.690396 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.702813 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705102 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" 
event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerStarted","Data":"28d4a04fcc5775314c16e4e41d5f3d39ce7f3433f98b544e829e80514ae5736f"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705207 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705266 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"a79a67d4dd15044e0af5558f93d1a71d4610b083ab05d901ec4d411b19f5dea2"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705287 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5m2xx" event={"ID":"6ff05938-ab46-4a8d-ba5d-d583eac37163","Type":"ContainerStarted","Data":"da3577b168a7e9b883c0f39aaa8c2829941319ca9ff4ef35eee8ae3dfa338d16"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705307 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5m2xx" event={"ID":"6ff05938-ab46-4a8d-ba5d-d583eac37163","Type":"ContainerStarted","Data":"7404a0b6b101e28a0e064b8bc561b604e81cb64316403e56b4821246b787351e"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705325 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" event={"ID":"ef32555a-37d0-4ff7-80d6-3d572916786f","Type":"ContainerStarted","Data":"f27688a46cecb1d0f451bdf3dfd38c9159e089f29d78b4b499102d79d1d9e088"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705352 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-drphl" event={"ID":"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a","Type":"ContainerStarted","Data":"f0882411f3a83562f84a795fcea42b40579c915d890151da22a39da43e790764"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705373 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-drphl" event={"ID":"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a","Type":"ContainerStarted","Data":"e61041ffddd9b466e4ae8cb53edba995e2482614fda5f140b38e226f9fc409b7"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705390 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"1edbb71af15ea653db183aeb60c6db8cbaffd56cb323de349a40aa3de84632c2"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705407 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"0a536d4579e70b40253cab8993e7d9aab9bf9b603c56f468bccbd0b0c0104268"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705425 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"50629541c6ec6e98df927d794654d05d1dc0cfb0f80a4431879af4b0bc58581a"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705444 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"671fb92df0ecdc7e8c13a14b774ebb3aff352f651733cce2bf128ef55d52ef26"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705466 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"e0727d0b39a529fb8d1b358506430cfadc5ad622b225c8682b28851f4f86f457"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705482 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sdjgw" event={"ID":"c0ccdffb-2e23-428a-8423-b08f9d708b15","Type":"ContainerStarted","Data":"a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705502 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sdjgw" event={"ID":"c0ccdffb-2e23-428a-8423-b08f9d708b15","Type":"ContainerStarted","Data":"4153a34fc70c84e28901c6d9a8c825025a078ae491f5251ba9eeb4aa7c025514"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705519 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.705536 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"76cdc37d10e77ba11e615e1ec658c7f4c41dfcb9153b41d21b1a640d0eb853f0"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.715863 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.715952 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.715992 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.716083 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716223 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716261 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716301 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716325 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716366 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.716327566 +0000 UTC m=+90.827863496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716404 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.716382878 +0000 UTC m=+90.827918838 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716426 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716473 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716496 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.716605 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.716575343 +0000 UTC m=+90.828111263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.717263 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: E0130 00:12:05.717371 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.717348605 +0000 UTC m=+90.828884505 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.717373 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.725005 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.738680 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88
dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cgq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.756725 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58962898-db76-4092-9fd2-6ee041453295\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://821751418b1c5520e37391e8725d8ce1d3b5e1a6c4904587df7e9523af49ec05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"m
emory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec9c97c3cc2d8213a5562ed88f952b05cf8c3d680a573498ad7b11259cf9a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://465448a262b54efe8e7d250fdbc015c4980c5fe972cce80cc5b93ac3b5fbb74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ca5cc06dc5d68e32e4afff843811d1c9a18c194cd728caf0b991d8afe748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb9268a4c90e72b2cc87518edaf2e2d38186097e11994c07eef72b31deaf5f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\
\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.768403 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"532769ff-9767-48cd-8c80-07c96da318f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:11:47.150117 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:47.150348 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:47.151748 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3419708539/tls.crt::/tmp/serving-cert-3419708539/tls.key\\\\\\\"\\\\nI0130 00:11:47.930472 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:47.932364 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:47.932382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:47.932413 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:47.932420 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:47.936849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 00:11:47.936865 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 00:11:47.936880 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:47.936897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:47.936900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:47.936903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:47.939460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.776673 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.776776 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.776804 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.776837 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.776862 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.778580 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.788424 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.797730 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.812128 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-drphl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55tjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-drphl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.834555 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7
d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPat
h\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cdnjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.843138 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef32555a-37d0-4ff7-80d6-3d572916786f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vlmjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.850912 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20183f5d-15d9-4a2e-afab-ba81d49aae6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6f145d8fb662efd4297227d05be0be66559525a069a56f8766ddf99188e96072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.860413 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa751395-67ab-4cce-8dbb-9f2ba6c32b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://593e17d2b7b52cdae7ea597a23e84ff0bf2aa60c375f9aca06dcd08c9e3f62e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b224c5fd1d4850a504ea24d2a7a69f9bc69c770196bb142ca72970d03830cb31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c69ec53206c2bd047ddabdee78ed4f580ff7c5dab223808d8d5f78ea3efadbd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 
00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.868484 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.879779 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-sdjgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0ccdffb-2e23-428a-8423-b08f9d708b15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rprhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sdjgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.880047 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.880102 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.880123 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.880148 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.880165 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.890923 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3965caad-c581-45b3-88e0-99b4039659c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z8qm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.902729 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7401198e-bb3b-4751-8aa5-cd73dd7f11b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7316893852c737b0d9ba4d82f95e30368750d3de645e594c803519f4536f5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c3ab95093b37cc80e5bd368dd2136ddd5b4f4f24601b417cc1a9d1105b99471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4df65a3ddf5bacacb01f75935c3483e4e65c115d77a32405d17da0426f4989e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.914501 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.952904 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09afae3-bd41-4f19-af49-34689367f229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q7tcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.978282 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.985021 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.985074 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.985086 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.985104 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.985114 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:05Z","lastTransitionTime":"2026-01-30T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:05 crc kubenswrapper[5117]: I0130 00:12:05.995003 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://da3577b168a7e9b883c0f39aaa8c2829941319ca9ff4ef35eee8ae3dfa338d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.010538 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cgq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.018642 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.018957 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.018931027 +0000 UTC m=+91.130466927 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.030636 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58962898-db76-4092-9fd2-6ee041453295\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://821751418b1c5520e37391e8725d8ce1d3b5e1a6c4904587df7e9523af49ec05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec9c97c3cc2d8213a5562ed88f952b05cf8c3d680a573498ad7b11259cf9a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://465448a262b54efe8e7d250fdbc015c4980c5fe972cce80cc5b93ac3b5fbb74a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ca5cc06dc5d68e32e4afff843811d1c9a18c194cd728caf0b991d8afe748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb9268a4c90e72b2cc87518edaf2e2d38186097e11994c07eef72b31deaf5f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4c
abcbd8ecda2a438340190abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.036273 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.036304 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.036291 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.036445 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.036641 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.036846 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.036948 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.036839 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.044481 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"532769ff-9767-48cd-8c80-07c96da318f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:11:47.150117 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:47.150348 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:47.151748 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3419708539/tls.crt::/tmp/serving-cert-3419708539/tls.key\\\\\\\"\\\\nI0130 00:11:47.930472 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:47.932364 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:47.932382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:47.932413 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:47.932420 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:47.936849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 00:11:47.936865 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0130 00:11:47.936880 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:47.936897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:47.936900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:47.936903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:47.939460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"exitCode\
\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.057096 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://671fb92df0ecdc7e8c13a14b774ebb3aff352f651733cce2bf128ef55d52ef26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.065977 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.075449 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.081326 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-drphl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0882411f3a83562f84a795fcea42b40579c915d890151da22a39da43e790764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55tjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-drphl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.087478 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.087517 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.087530 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.087546 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.087556 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.096791 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cdnjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.105100 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef32555a-37d0-4ff7-80d6-3d572916786f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vlmjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc 
kubenswrapper[5117]: I0130 00:12:06.114288 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20183f5d-15d9-4a2e-afab-ba81d49aae6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6f145d8fb662efd4297227d05be0be66559525a069a56f8766ddf99188e96072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.119827 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.119999 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:06 crc kubenswrapper[5117]: E0130 00:12:06.120118 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs podName:a09afae3-bd41-4f19-af49-34689367f229 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.120088162 +0000 UTC m=+91.231624162 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs") pod "network-metrics-daemon-q7tcw" (UID: "a09afae3-bd41-4f19-af49-34689367f229") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.127633 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa751395-67ab-4cce-8dbb-9f2ba6c32b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://593e17d2b7b52cdae7ea597a23e84ff0bf2aa60c375f9aca06dcd08c9e3f62e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b224c5fd1d4850a504ea24d2a7a69f9bc69c770196bb142ca72970d03830cb31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c69ec53206c2bd047ddabdee78ed4f580ff7c5dab223808d8d5f78ea3efadbd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.136421 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.149671 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-sdjgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0ccdffb-2e23-428a-8423-b08f9d708b15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rprhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sdjgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.159559 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3965caad-c581-45b3-88e0-99b4039659c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z8qm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.171460 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7401198e-bb3b-4751-8aa5-cd73dd7f11b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7316893852c737b0d9ba4d82f95e30368750d3de645e594c803519f4536f5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c3ab95093b37cc80e5bd368dd2136ddd5b4f4f24601b417cc1a9d1105b99471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4df65a3ddf5bacacb01f75935c3483e4e65c115d77a32405d17da0426f4989e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.182449 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.190251 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.190290 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.190300 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.190317 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.190328 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.191553 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09afae3-bd41-4f19-af49-34689367f229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q7tcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.257264 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"2afc5a802f14dafef90781946554fb393ce8280821373eccba43a6bdc90c790f"} Jan 30 
00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.259848 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerStarted","Data":"cb83c5906fbd10a0166e94364ccd7f075035f51e8b22b7a9ee7691325def98cd"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.261652 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.263757 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" event={"ID":"ef32555a-37d0-4ff7-80d6-3d572916786f","Type":"ContainerStarted","Data":"c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.263791 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" event={"ID":"ef32555a-37d0-4ff7-80d6-3d572916786f","Type":"ContainerStarted","Data":"762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.265306 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"64207d76cffa8f426761066265d9c276f11e17202d0b999399fc74465aa148b9"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.277425 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3965caad-c581-45b3-88e0-99b4039659c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2afc5a802f14dafef90781946554fb393ce8280821373eccba43a6bdc90c790f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7tj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z8qm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.291950 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7401198e-bb3b-4751-8aa5-cd73dd7f11b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7316893852c737b0d9ba4d82f95e30368750d3de645e594c803519f4536f5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c3ab95093b37cc80e5bd368dd2136ddd5b4f4f24601b417cc1a9d1105b99471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4df65a3ddf5bacacb01f75935c3483e4e65c115d77a32405d17da0426f4989e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64e37271d65114047eb1033f869e95083f3ce8d42b99ace26fb58a79b90da727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.292105 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.292146 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.292155 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.292189 5117 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.292204 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.307172 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.318531 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09afae3-bd41-4f19-af49-34689367f229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knhxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q7tcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.329586 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.342220 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://da3577b168a7e9b883c0f39aaa8c2829941319ca9ff4ef35eee8ae3dfa338d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.361356 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f881
2085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cgq54\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.395094 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.395147 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.395157 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.395176 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.395186 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.401305 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58962898-db76-4092-9fd2-6ee041453295\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://821751418b1c5520e37391e8725d8ce1d3b5e1a6c4904587df7e9523af49ec05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec9c97c3cc2d8213a5562ed88f952b05cf8c3d680a573498ad7b11259cf9a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://465448a262b54efe8e7d250fdbc015c4980c5fe972cce80cc5b93ac3b5fbb74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ca5cc06dc5d68e32e4afff843811d1c9a18c194cd728caf0b991d8afe748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"t
mp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb9268a4c90e72b2cc87518edaf2e2d38186097e11994c07eef72b31deaf5f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.417973 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"532769ff-9767-48cd-8c80-07c96da318f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:11:47.150117 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:47.150348 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:47.151748 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3419708539/tls.crt::/tmp/serving-cert-3419708539/tls.key\\\\\\\"\\\\nI0130 00:11:47.930472 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:47.932364 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:47.932382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:47.932413 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:47.932420 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:47.936849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 00:11:47.936865 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 00:11:47.936880 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:47.936897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:47.936900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:47.936903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:47.939460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.436956 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://671fb92df0ecdc7e8c13a14b774ebb3aff352f651733cce2bf128ef55d52ef26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.471084 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.499823 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.499952 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.499969 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.499988 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.500001 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.505661 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.545197 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-drphl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0882411f3a83562f84a795fcea42b40579c915d890151da22a39da43e790764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55tjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-drphl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.594753 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cdnjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.602553 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.602612 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.602635 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.602663 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.602682 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.628392 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef32555a-37d0-4ff7-80d6-3d572916786f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vlmjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc 
kubenswrapper[5117]: I0130 00:12:06.670703 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20183f5d-15d9-4a2e-afab-ba81d49aae6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6f145d8fb662efd4297227d05be0be66559525a069a56f8766ddf99188e96072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dbfa42d4d11914166003e31d961fd95b2941621e5bde3663323b1e770ef00df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.705344 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.705405 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.705421 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.705442 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.705459 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.713068 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa751395-67ab-4cce-8dbb-9f2ba6c32b69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://593e17d2b7b52cdae7ea597a23e84ff0bf2aa60c375f9aca06dcd08c9e3f62e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b224c5fd1d4850a504ea24d2a7a69f9bc69c770196bb142ca72970d03830cb31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c69ec53206c2bd047ddabdee78ed4f580ff7c5dab223808d8d5f78ea3efadbd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.750830 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.793338 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-sdjgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0ccdffb-2e23-428a-8423-b08f9d708b15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rprhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sdjgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.808860 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.808949 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.808971 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.809002 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.809022 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.829574 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64207d76cffa8f426761066265d9c276f11e17202d0b999399fc74465aa148b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a536d4579e70b40253cab8993e7d9aab9bf9b603c56f468bccbd0b0c0104268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 
00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.867200 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m2xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff05938-ab46-4a8d-ba5d-d583eac37163\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://da3577b168a7e9b883c0f39aaa8c2829941319ca9ff4ef35eee8ae3dfa338d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lb62d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m2xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.911343 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.911390 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.911404 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.911423 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.911437 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:06Z","lastTransitionTime":"2026-01-30T00:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.914015 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1924d1c-fa4c-4d24-8885-d545bbb1c47e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb83c5906fbd10a0166e94364ccd7f075035f51e8b22b7a9ee7691325def98cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cgq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.957619 5117 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58962898-db76-4092-9fd2-6ee041453295\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://821751418b1c5520e37391e8725d8ce1d3b5e1a6c4904587df7e9523af49ec05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec9c97c3cc2d8213a5562ed88f952b05cf8c3d680a573498ad7b11259cf9a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4
65448a262b54efe8e7d250fdbc015c4980c5fe972cce80cc5b93ac3b5fbb74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ca5cc06dc5d68e32e4afff843811d1c9a18c194cd728caf0b991d8afe748e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cb9268a4c90e72b2cc87518edaf2e2d38186097e11994c07eef72b31deaf5f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daa15cfea4a3cc35b4fb6f183735df4f59bdc4cabcbd8ecda2a438340190abf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b6e75c4f68b33a957fba2cb178da8c8a3b88083eb5d3adfafe86eb8c93ec27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ec5b54c1c22969a0a9b666eafeaa7be54e1427ba29d8845fa7501752a31a0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-loca
l-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5117]: I0130 00:12:06.989734 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"532769ff-9767-48cd-8c80-07c96da318f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:47Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0130 00:11:47.150117 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 00:11:47.150348 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0130 00:11:47.151748 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3419708539/tls.crt::/tmp/serving-cert-3419708539/tls.key\\\\\\\"\\\\nI0130 00:11:47.930472 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 00:11:47.932364 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 00:11:47.932382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 00:11:47.932413 1 maxinflight.go:116] \\\\\\\"Set 
denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 00:11:47.932420 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 00:11:47.936849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 00:11:47.936865 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 00:11:47.936880 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 00:11:47.936893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 00:11:47.936897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 00:11:47.936900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 00:11:47.936903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 00:11:47.939460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8
e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.014222 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.014304 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.014325 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.014352 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.014373 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.031295 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://671fb92df0ecdc7e8c13a14b774ebb3aff352f651733cce2bf128ef55d52ef26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.075141 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.114535 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.117310 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.117378 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.117408 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.117445 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.117471 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.148569 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-drphl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee02588-f6ac-4300-9cbb-17e3a0b80e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0882411f3a83562f84a795fcea42b40579c915d890151da22a39da43e790764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55tjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-drphl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.200661 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpvmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cdnjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.219518 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.219575 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.219587 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.219605 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.219619 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.226078 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef32555a-37d0-4ff7-80d6-3d572916786f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:12:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hklp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:12:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-vlmjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.270626 5117 generic.go:358] "Generic (PLEG): container finished" podID="b1924d1c-fa4c-4d24-8885-d545bbb1c47e" containerID="cb83c5906fbd10a0166e94364ccd7f075035f51e8b22b7a9ee7691325def98cd" exitCode=0 Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.270704 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerDied","Data":"cb83c5906fbd10a0166e94364ccd7f075035f51e8b22b7a9ee7691325def98cd"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.274479 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.274517 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.274527 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.313462 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.313441033 podStartE2EDuration="4.313441033s" podCreationTimestamp="2026-01-30 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.285093563 +0000 UTC m=+90.396629453" watchObservedRunningTime="2026-01-30 00:12:07.313441033 +0000 UTC m=+90.424976913" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.322389 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.322428 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.322439 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.322453 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 
00:12:07.322467 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.352775 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=4.352753863 podStartE2EDuration="4.352753863s" podCreationTimestamp="2026-01-30 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.313626908 +0000 UTC m=+90.425162798" watchObservedRunningTime="2026-01-30 00:12:07.352753863 +0000 UTC m=+90.464289743" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.396841 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sdjgw" podStartSLOduration=66.396818186 podStartE2EDuration="1m6.396818186s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.396638711 +0000 UTC m=+90.508174611" watchObservedRunningTime="2026-01-30 00:12:07.396818186 +0000 UTC m=+90.508354076" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.425539 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.425592 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.425603 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.425622 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.425634 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.469876 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.469851438 podStartE2EDuration="4.469851438s" podCreationTimestamp="2026-01-30 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.46957738 +0000 UTC m=+90.581113270" watchObservedRunningTime="2026-01-30 00:12:07.469851438 +0000 UTC m=+90.581387328" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.470069 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podStartSLOduration=67.470064364 podStartE2EDuration="1m7.470064364s" podCreationTimestamp="2026-01-30 00:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.430563579 +0000 UTC m=+90.542099479" watchObservedRunningTime="2026-01-30 00:12:07.470064364 +0000 UTC m=+90.581600254" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.527279 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.527325 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.527338 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.527356 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.527366 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.627310 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5m2xx" podStartSLOduration=67.627271111 podStartE2EDuration="1m7.627271111s" podCreationTimestamp="2026-01-30 00:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.626961212 +0000 UTC m=+90.738497122" watchObservedRunningTime="2026-01-30 00:12:07.627271111 +0000 UTC m=+90.738807011" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.632203 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.632263 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.632279 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.632300 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.632316 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.720387 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.720366318 podStartE2EDuration="4.720366318s" podCreationTimestamp="2026-01-30 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.719620187 +0000 UTC m=+90.831156097" watchObservedRunningTime="2026-01-30 00:12:07.720366318 +0000 UTC m=+90.831902208" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.739527 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.740097 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.740112 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.740133 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.740147 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.743809 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.743898 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.743934 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.743970 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744084 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744164 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.744142169 +0000 UTC m=+94.855678059 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744605 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744622 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744637 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744672 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.744660784 +0000 UTC m=+94.856196674 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744766 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744796 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.744788528 +0000 UTC m=+94.856324418 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744848 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744858 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744866 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:07 crc kubenswrapper[5117]: E0130 00:12:07.744890 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.7448832 +0000 UTC m=+94.856419100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.842320 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.842427 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.842440 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.842456 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.842467 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.909013 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-drphl" podStartSLOduration=67.908991012 podStartE2EDuration="1m7.908991012s" podCreationTimestamp="2026-01-30 00:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.90855385 +0000 UTC m=+91.020089740" watchObservedRunningTime="2026-01-30 00:12:07.908991012 +0000 UTC m=+91.020526902" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.944462 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.944511 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.944523 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.944541 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.944552 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:07Z","lastTransitionTime":"2026-01-30T00:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:07 crc kubenswrapper[5117]: I0130 00:12:07.989814 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" podStartSLOduration=66.989785092 podStartE2EDuration="1m6.989785092s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:07.987336203 +0000 UTC m=+91.098872093" watchObservedRunningTime="2026-01-30 00:12:07.989785092 +0000 UTC m=+91.101320982" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.037097 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.037127 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.037097 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.037243 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.037270 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.037304 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.037476 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.037607 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.046711 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.046919 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.046885604 +0000 UTC m=+95.158421524 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.047736 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.047770 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.047783 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.047798 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.047810 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:08Z","lastTransitionTime":"2026-01-30T00:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.147706 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.147904 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:08 crc kubenswrapper[5117]: E0130 00:12:08.148000 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs podName:a09afae3-bd41-4f19-af49-34689367f229 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.147975817 +0000 UTC m=+95.259511707 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs") pod "network-metrics-daemon-q7tcw" (UID: "a09afae3-bd41-4f19-af49-34689367f229") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.149936 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.149972 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.149986 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.150006 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.150019 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:08Z","lastTransitionTime":"2026-01-30T00:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.251994 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.252037 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.252047 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.252062 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.252073 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:08Z","lastTransitionTime":"2026-01-30T00:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.281141 5117 generic.go:358] "Generic (PLEG): container finished" podID="b1924d1c-fa4c-4d24-8885-d545bbb1c47e" containerID="7f12e58974e8d92426324f1f2859cb2925f3f38a7656acd604487c6cb6e0d17d" exitCode=0 Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.281195 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerDied","Data":"7f12e58974e8d92426324f1f2859cb2925f3f38a7656acd604487c6cb6e0d17d"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.285022 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.285067 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.286159 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"d67b8447ab453d40f6f16866d8226de5450ee8d9f9bfb7b03f43f4cee537b167"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.353803 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.353864 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.353880 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.353902 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.353916 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:08Z","lastTransitionTime":"2026-01-30T00:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.408434 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.408486 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.408496 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.408513 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.408526 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:12:08Z","lastTransitionTime":"2026-01-30T00:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.447810 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk"] Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.452283 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.456662 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.457203 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.457592 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.458501 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.551562 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75c64597-ea19-4e5a-81b9-472206e913d4-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.551597 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c64597-ea19-4e5a-81b9-472206e913d4-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.551636 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/75c64597-ea19-4e5a-81b9-472206e913d4-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.551654 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75c64597-ea19-4e5a-81b9-472206e913d4-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.551704 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/75c64597-ea19-4e5a-81b9-472206e913d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653049 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75c64597-ea19-4e5a-81b9-472206e913d4-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653095 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c64597-ea19-4e5a-81b9-472206e913d4-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653129 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/75c64597-ea19-4e5a-81b9-472206e913d4-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653145 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75c64597-ea19-4e5a-81b9-472206e913d4-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653176 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/75c64597-ea19-4e5a-81b9-472206e913d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653577 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/75c64597-ea19-4e5a-81b9-472206e913d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.653733 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/75c64597-ea19-4e5a-81b9-472206e913d4-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.654128 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/75c64597-ea19-4e5a-81b9-472206e913d4-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.662120 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75c64597-ea19-4e5a-81b9-472206e913d4-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.674623 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c64597-ea19-4e5a-81b9-472206e913d4-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-429tk\" (UID: \"75c64597-ea19-4e5a-81b9-472206e913d4\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: I0130 00:12:08.797538 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" Jan 30 00:12:08 crc kubenswrapper[5117]: W0130 00:12:08.824248 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75c64597_ea19_4e5a_81b9_472206e913d4.slice/crio-1cdfa88ec0e75efbde4f136e3b5fa7716fda9ed3d19a8212d7c34c0c69a83aef WatchSource:0}: Error finding container 1cdfa88ec0e75efbde4f136e3b5fa7716fda9ed3d19a8212d7c34c0c69a83aef: Status 404 returned error can't find the container with id 1cdfa88ec0e75efbde4f136e3b5fa7716fda9ed3d19a8212d7c34c0c69a83aef Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.037997 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.046628 5117 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.292347 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" event={"ID":"75c64597-ea19-4e5a-81b9-472206e913d4","Type":"ContainerStarted","Data":"ab6d82cf281e69e062a658a9bf4acfb86949d5b17e5bdc63cd0ecaa877ddb67f"} Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.292404 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" event={"ID":"75c64597-ea19-4e5a-81b9-472206e913d4","Type":"ContainerStarted","Data":"1cdfa88ec0e75efbde4f136e3b5fa7716fda9ed3d19a8212d7c34c0c69a83aef"} Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.296054 5117 generic.go:358] "Generic (PLEG): container finished" podID="b1924d1c-fa4c-4d24-8885-d545bbb1c47e" containerID="12bddf98137d597579d537e698a97ed8972af88a556694646e01186d5b6e4351" exitCode=0 Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.296227 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerDied","Data":"12bddf98137d597579d537e698a97ed8972af88a556694646e01186d5b6e4351"} Jan 30 00:12:09 crc kubenswrapper[5117]: I0130 00:12:09.342638 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-429tk" podStartSLOduration=68.342606933 podStartE2EDuration="1m8.342606933s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:09.311110045 +0000 UTC m=+92.422645985" watchObservedRunningTime="2026-01-30 00:12:09.342606933 +0000 UTC m=+92.454142873" Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.036640 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:10 crc kubenswrapper[5117]: E0130 00:12:10.036798 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.036861 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:10 crc kubenswrapper[5117]: E0130 00:12:10.037115 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.037261 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:10 crc kubenswrapper[5117]: E0130 00:12:10.037503 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.038866 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:10 crc kubenswrapper[5117]: E0130 00:12:10.039284 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.303015 5117 generic.go:358] "Generic (PLEG): container finished" podID="b1924d1c-fa4c-4d24-8885-d545bbb1c47e" containerID="177ce938839eadda47d2831a6fc489f304200fbfb7c959ee7a302bff7e6f6e1d" exitCode=0 Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.303119 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerDied","Data":"177ce938839eadda47d2831a6fc489f304200fbfb7c959ee7a302bff7e6f6e1d"} Jan 30 00:12:10 crc kubenswrapper[5117]: I0130 00:12:10.320005 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} Jan 30 00:12:11 crc kubenswrapper[5117]: I0130 00:12:11.327560 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerStarted","Data":"79163545d15895ea6b07d142687f2cebca275c0862218df3c6829fe5993f98b1"} Jan 30 00:12:11 crc kubenswrapper[5117]: I0130 00:12:11.790859 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:11 crc kubenswrapper[5117]: I0130 00:12:11.790936 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:11 crc kubenswrapper[5117]: I0130 00:12:11.790992 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:11 crc kubenswrapper[5117]: I0130 00:12:11.791049 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791127 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791205 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 
30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791215 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791222 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791244 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791229 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.791206533 +0000 UTC m=+102.902742423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791293 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.791282565 +0000 UTC m=+102.902818455 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791128 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791308 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.791300076 +0000 UTC m=+102.902835966 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791330 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791352 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:11 crc kubenswrapper[5117]: E0130 00:12:11.791414 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.791387468 +0000 UTC m=+102.902923398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.036336 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.036387 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.036505 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.036523 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.037375 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.037479 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.037592 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.037784 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.095171 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.095437 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.095395988 +0000 UTC m=+103.206931888 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.196747 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.197000 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:12 crc kubenswrapper[5117]: E0130 00:12:12.197151 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs podName:a09afae3-bd41-4f19-af49-34689367f229 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.197119869 +0000 UTC m=+103.308655839 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs") pod "network-metrics-daemon-q7tcw" (UID: "a09afae3-bd41-4f19-af49-34689367f229") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.335233 5117 generic.go:358] "Generic (PLEG): container finished" podID="b1924d1c-fa4c-4d24-8885-d545bbb1c47e" containerID="79163545d15895ea6b07d142687f2cebca275c0862218df3c6829fe5993f98b1" exitCode=0 Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.335331 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerDied","Data":"79163545d15895ea6b07d142687f2cebca275c0862218df3c6829fe5993f98b1"} Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.342366 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerStarted","Data":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.355496 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.355547 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.385545 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:12 crc kubenswrapper[5117]: I0130 00:12:12.436919 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podStartSLOduration=71.436872585 podStartE2EDuration="1m11.436872585s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:12.40231918 +0000 UTC m=+95.513855090" watchObservedRunningTime="2026-01-30 00:12:12.436872585 +0000 UTC m=+95.548408485" Jan 30 00:12:13 crc kubenswrapper[5117]: I0130 00:12:13.350219 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerStarted","Data":"b1b40262d81520a6da4288ba53433ccd3f8a70ce6ead1a94682d4ca6a3a2368d"} Jan 30 00:12:13 crc kubenswrapper[5117]: I0130 00:12:13.351272 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:13 crc kubenswrapper[5117]: I0130 00:12:13.376212 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.037467 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.037565 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.037464 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:14 crc kubenswrapper[5117]: E0130 00:12:14.037774 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.037926 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:14 crc kubenswrapper[5117]: E0130 00:12:14.038120 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:14 crc kubenswrapper[5117]: E0130 00:12:14.038353 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:14 crc kubenswrapper[5117]: E0130 00:12:14.038451 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.361813 5117 generic.go:358] "Generic (PLEG): container finished" podID="b1924d1c-fa4c-4d24-8885-d545bbb1c47e" containerID="b1b40262d81520a6da4288ba53433ccd3f8a70ce6ead1a94682d4ca6a3a2368d" exitCode=0 Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.362061 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerDied","Data":"b1b40262d81520a6da4288ba53433ccd3f8a70ce6ead1a94682d4ca6a3a2368d"} Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.362178 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cgq54" event={"ID":"b1924d1c-fa4c-4d24-8885-d545bbb1c47e","Type":"ContainerStarted","Data":"475a1b53df3a69d86ec9397081b40739a3b1e0f1eeef174b3600b06bd93460f6"} Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.409438 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cgq54" podStartSLOduration=73.409416248 podStartE2EDuration="1m13.409416248s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.40807607 +0000 UTC m=+97.519611970" watchObservedRunningTime="2026-01-30 00:12:14.409416248 +0000 UTC m=+97.520952198" Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.950120 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q7tcw"] Jan 30 00:12:14 crc kubenswrapper[5117]: I0130 00:12:14.950323 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:14 crc kubenswrapper[5117]: E0130 00:12:14.950444 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:16 crc kubenswrapper[5117]: I0130 00:12:16.036940 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:16 crc kubenswrapper[5117]: I0130 00:12:16.036944 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:16 crc kubenswrapper[5117]: E0130 00:12:16.037420 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:16 crc kubenswrapper[5117]: I0130 00:12:16.036976 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:16 crc kubenswrapper[5117]: I0130 00:12:16.036957 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:16 crc kubenswrapper[5117]: E0130 00:12:16.037495 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:16 crc kubenswrapper[5117]: E0130 00:12:16.037557 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:16 crc kubenswrapper[5117]: E0130 00:12:16.037661 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:17 crc kubenswrapper[5117]: I0130 00:12:17.263363 5117 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.037050 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.037144 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.037246 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:18 crc kubenswrapper[5117]: E0130 00:12:18.037275 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:18 crc kubenswrapper[5117]: E0130 00:12:18.037418 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q7tcw" podUID="a09afae3-bd41-4f19-af49-34689367f229" Jan 30 00:12:18 crc kubenswrapper[5117]: E0130 00:12:18.037815 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.038006 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:18 crc kubenswrapper[5117]: E0130 00:12:18.038456 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.038669 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:12:18 crc kubenswrapper[5117]: E0130 00:12:18.039079 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.967327 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 30 00:12:18 crc kubenswrapper[5117]: I0130 00:12:18.968947 5117 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.014138 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-scnb9"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.108158 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.111959 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.113174 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.113967 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.114092 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.114632 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.114683 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.114974 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.115097 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.115345 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.115468 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.115676 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.131867 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.185548 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.185700 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.188252 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.188574 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.200900 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.200943 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-serving-cert\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201012 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sknzd\" (UniqueName: \"kubernetes.io/projected/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-kube-api-access-sknzd\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201043 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-etcd-client\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201070 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-audit-dir\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201240 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-image-import-ca\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201277 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-encryption-config\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201302 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201343 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-node-pullsecrets\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201369 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-config\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.201390 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-audit\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.254758 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.254895 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.254758 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.255648 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-b52fx"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.258846 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.259482 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.259750 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260019 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260250 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260484 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260622 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260803 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260843 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260903 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260931 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.260960 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.261064 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.261179 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.269620 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.274924 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72"] Jan 30 
00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.275031 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.277498 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.278773 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.278943 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.279075 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.280229 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.280301 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.301848 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkm8\" (UniqueName: \"kubernetes.io/projected/eb191c78-b1b1-4b69-b609-210416eb3356-kube-api-access-zbkm8\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302071 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-client-ca\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302157 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fv4g\" (UniqueName: \"kubernetes.io/projected/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-kube-api-access-9fv4g\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302238 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-image-import-ca\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302278 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-serving-cert\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302323 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-encryption-config\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302359 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302416 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-node-pullsecrets\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302468 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-config\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302517 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-audit\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302556 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302602 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.302653 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-serving-cert\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303368 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-config\") pod 
\"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303412 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-tmp\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303442 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sknzd\" (UniqueName: \"kubernetes.io/projected/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-kube-api-access-sknzd\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303490 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-config\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303516 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb191c78-b1b1-4b69-b609-210416eb3356-tmp\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303541 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb191c78-b1b1-4b69-b609-210416eb3356-serving-cert\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303562 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-client-ca\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303584 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-etcd-client\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.303605 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-audit-dir\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc 
kubenswrapper[5117]: I0130 00:12:19.303663 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-audit-dir\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.304117 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-node-pullsecrets\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.304257 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.304304 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-image-import-ca\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.304452 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-config\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.305098 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-audit\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.307023 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.313015 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-encryption-config\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.313175 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-etcd-client\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.314612 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-serving-cert\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.322323 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sknzd\" (UniqueName: \"kubernetes.io/projected/fbfdc6c4-be51-4e2c-8ed3-44424ccde813-kube-api-access-sknzd\") pod \"apiserver-9ddfb9f55-scnb9\" (UID: \"fbfdc6c4-be51-4e2c-8ed3-44424ccde813\") " pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405102 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkm8\" (UniqueName: \"kubernetes.io/projected/eb191c78-b1b1-4b69-b609-210416eb3356-kube-api-access-zbkm8\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405305 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682ed001-72d5-49dd-80bc-a8bb65323efd-config\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405353 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-client-ca\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405407 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9fv4g\" (UniqueName: \"kubernetes.io/projected/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-kube-api-access-9fv4g\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405450 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnzs\" (UniqueName: \"kubernetes.io/projected/682ed001-72d5-49dd-80bc-a8bb65323efd-kube-api-access-cjnzs\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405474 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-serving-cert\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.405542 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-g7kqs\" 
(UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406190 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-config\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406236 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-tmp\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406264 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/682ed001-72d5-49dd-80bc-a8bb65323efd-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406287 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-config\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406310 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb191c78-b1b1-4b69-b609-210416eb3356-tmp\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406332 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/682ed001-72d5-49dd-80bc-a8bb65323efd-images\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406367 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb191c78-b1b1-4b69-b609-210416eb3356-serving-cert\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406392 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-client-ca\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.406843 
5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-client-ca\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.407864 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-config\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.407894 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-config\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.408288 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.408495 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-tmp\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.409744 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-client-ca\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.410227 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb191c78-b1b1-4b69-b609-210416eb3356-tmp\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.410358 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-serving-cert\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.410476 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.410751 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.414159 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb191c78-b1b1-4b69-b609-210416eb3356-serving-cert\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.419604 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.420480 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.421517 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.421667 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.421825 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.422318 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.428009 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fv4g\" (UniqueName: \"kubernetes.io/projected/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-kube-api-access-9fv4g\") pod \"route-controller-manager-776cdc94d6-cgnvb\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.431135 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkm8\" (UniqueName: \"kubernetes.io/projected/eb191c78-b1b1-4b69-b609-210416eb3356-kube-api-access-zbkm8\") pod \"controller-manager-65b6cccf98-g7kqs\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.433701 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.438666 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.507517 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r587q\" (UniqueName: \"kubernetes.io/projected/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-kube-api-access-r587q\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.507864 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-config\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.507988 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnzs\" (UniqueName: \"kubernetes.io/projected/682ed001-72d5-49dd-80bc-a8bb65323efd-kube-api-access-cjnzs\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.508108 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.508216 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.508458 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-serving-cert\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.508558 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/682ed001-72d5-49dd-80bc-a8bb65323efd-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.508607 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/682ed001-72d5-49dd-80bc-a8bb65323efd-images\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: 
\"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.508745 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682ed001-72d5-49dd-80bc-a8bb65323efd-config\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.509785 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/682ed001-72d5-49dd-80bc-a8bb65323efd-config\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.509809 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/682ed001-72d5-49dd-80bc-a8bb65323efd-images\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.515699 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/682ed001-72d5-49dd-80bc-a8bb65323efd-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.529259 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnzs\" (UniqueName: \"kubernetes.io/projected/682ed001-72d5-49dd-80bc-a8bb65323efd-kube-api-access-cjnzs\") pod \"machine-api-operator-755bb95488-b52fx\" (UID: \"682ed001-72d5-49dd-80bc-a8bb65323efd\") " pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.570991 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.593974 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.597461 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.601058 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.601288 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.604279 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.604503 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.604591 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.604610 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.605118 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.609589 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r587q\" (UniqueName: \"kubernetes.io/projected/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-kube-api-access-r587q\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.609628 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-config\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.609680 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.609711 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.609773 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-serving-cert\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.611975 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.612400 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.615594 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-config\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.620959 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-serving-cert\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.631562 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r587q\" (UniqueName: \"kubernetes.io/projected/9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b-kube-api-access-r587q\") pod \"authentication-operator-7f5c659b84-m5l72\" (UID: \"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.656140 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.656366 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.660660 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.661124 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.661280 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.661370 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.661445 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.661784 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.663393 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.664864 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.665334 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.692877 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-mmnjm"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.693131 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.697001 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.697118 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.697896 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.698079 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.698197 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.698353 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.707101 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.712274 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-audit-policies\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.712537 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-audit-dir\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.712593 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-etcd-client\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.712763 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-serving-cert\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.713816 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11df2d2-8553-4697-8bcf-9a96d37bcc06-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" 
(UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.713883 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-trusted-ca-bundle\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.713937 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2dv\" (UniqueName: \"kubernetes.io/projected/d11df2d2-8553-4697-8bcf-9a96d37bcc06-kube-api-access-hr2dv\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.713980 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-etcd-serving-ca\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.714021 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11df2d2-8553-4697-8bcf-9a96d37bcc06-config\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.714204 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-encryption-config\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.722985 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6qc\" (UniqueName: \"kubernetes.io/projected/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-kube-api-access-vb6qc\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.729984 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgbnh"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.732208 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.736078 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.736536 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.736597 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.736751 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.739902 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.740263 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.740328 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.743367 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.745889 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.746270 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.748185 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.748776 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.752792 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.753706 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.755636 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.755675 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.755637 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.756722 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.756743 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.756798 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.756873 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.757998 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758089 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758153 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758293 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758430 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758564 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758684 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758809 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.758868 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.762169 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 
30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.768023 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-mq4qt"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.768218 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.770559 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.770587 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.770895 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.771107 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.771312 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.771436 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.771612 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.772284 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.780436 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.780675 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.787996 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.793045 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.794374 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-mq4qt" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.796245 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.796458 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.796720 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.799819 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-dvncc"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.806915 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.807092 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.817015 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.817225 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.817358 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.818523 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.819355 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.825843 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-etcd-serving-ca\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.825887 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-service-ca\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.825915 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/70053511-152f-4649-a478-cbce9a4bd8e5-auth-proxy-config\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.825940 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11df2d2-8553-4697-8bcf-9a96d37bcc06-config\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.825992 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcd2s\" (UniqueName: \"kubernetes.io/projected/235bb0bc-4887-4dfc-8a63-4f919855ef2c-kube-api-access-dcd2s\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.826022 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.826039 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.826076 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.826099 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.826117 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-serving-cert\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.826962 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-etcd-serving-ca\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827040 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/609de4f2-7b79-438a-b5c5-a2650396bc23-trusted-ca\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827299 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk6lc\" (UniqueName: \"kubernetes.io/projected/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-kube-api-access-gk6lc\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827771 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9s98\" (UniqueName: \"kubernetes.io/projected/70053511-152f-4649-a478-cbce9a4bd8e5-kube-api-access-z9s98\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827890 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/70053511-152f-4649-a478-cbce9a4bd8e5-machine-approver-tls\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827916 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827939 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.827959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-encryption-config\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.828349 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11df2d2-8553-4697-8bcf-9a96d37bcc06-config\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: E0130 00:12:19.828719 5117 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:19 crc kubenswrapper[5117]: E0130 00:12:19.828806 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.828787392 +0000 UTC m=+118.940323282 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829636 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6qc\" (UniqueName: \"kubernetes.io/projected/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-kube-api-access-vb6qc\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829674 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-policies\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829702 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829761 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609de4f2-7b79-438a-b5c5-a2650396bc23-serving-cert\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829799 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829840 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-audit-policies\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829871 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235bb0bc-4887-4dfc-8a63-4f919855ef2c-config\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829890 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829910 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-config\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: E0130 00:12:19.829912 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.829930 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfsl\" (UniqueName: \"kubernetes.io/projected/609de4f2-7b79-438a-b5c5-a2650396bc23-kube-api-access-mwfsl\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: E0130 00:12:19.829976 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.829945545 +0000 UTC m=+118.941481425 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830008 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70053511-152f-4649-a478-cbce9a4bd8e5-config\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830039 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830066 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830124 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/235bb0bc-4887-4dfc-8a63-4f919855ef2c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830157 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830193 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-client\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830223 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-audit-dir\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830253 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-dir\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830288 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-etcd-client\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830312 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-serving-cert\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830355 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609de4f2-7b79-438a-b5c5-a2650396bc23-config\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830380 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/235bb0bc-4887-4dfc-8a63-4f919855ef2c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830400 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830422 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830457 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11df2d2-8553-4697-8bcf-9a96d37bcc06-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830487 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-trusted-ca-bundle\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830505 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-audit-policies\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830514 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-ca\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830543 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.830608 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-audit-dir\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.833602 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-tmp-dir\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.833640 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl96g\" (UniqueName: \"kubernetes.io/projected/2cf47fab-c86d-4283-b285-b4ca795bf6d6-kube-api-access-bl96g\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.833706 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2dv\" (UniqueName: \"kubernetes.io/projected/d11df2d2-8553-4697-8bcf-9a96d37bcc06-kube-api-access-hr2dv\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.833816 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-trusted-ca-bundle\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.838366 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.838915 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11df2d2-8553-4697-8bcf-9a96d37bcc06-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.839279 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-encryption-config\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.839282 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.840517 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.843438 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-serving-cert\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.844416 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.854638 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.855073 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-etcd-client\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.866482 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:12:19 crc kubenswrapper[5117]: W0130 00:12:19.881432 5117 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1cd991b_8078_45cb_9591_ae3f5a4d4db4.slice/crio-9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad WatchSource:0}: Error finding container 9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad: Status 404 returned error can't find the container with id 9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.882202 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-nkcjt"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.882508 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.885879 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.909874 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6qc\" (UniqueName: \"kubernetes.io/projected/8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95-kube-api-access-vb6qc\") pod \"apiserver-8596bd845d-s2hrs\" (UID: \"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.925143 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2dv\" (UniqueName: \"kubernetes.io/projected/d11df2d2-8553-4697-8bcf-9a96d37bcc06-kube-api-access-hr2dv\") pod \"openshift-apiserver-operator-846cbfc458-6xn7s\" (UID: \"d11df2d2-8553-4697-8bcf-9a96d37bcc06\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.931138 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935072 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8kvj\" (UniqueName: \"kubernetes.io/projected/e161fe62-f260-4253-a91c-00d71e12cd51-kube-api-access-k8kvj\") pod \"downloads-747b44746d-mq4qt\" (UID: \"e161fe62-f260-4253-a91c-00d71e12cd51\") " pod="openshift-console/downloads-747b44746d-mq4qt" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935106 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-dir\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935132 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609de4f2-7b79-438a-b5c5-a2650396bc23-config\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935150 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/235bb0bc-4887-4dfc-8a63-4f919855ef2c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935167 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935183 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935208 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-oauth-serving-cert\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935228 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-ca\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935246 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935262 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-tmp-dir\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935276 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bl96g\" (UniqueName: \"kubernetes.io/projected/2cf47fab-c86d-4283-b285-b4ca795bf6d6-kube-api-access-bl96g\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935295 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935314 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-service-ca\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935329 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/70053511-152f-4649-a478-cbce9a4bd8e5-auth-proxy-config\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935351 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dcd2s\" (UniqueName: \"kubernetes.io/projected/235bb0bc-4887-4dfc-8a63-4f919855ef2c-kube-api-access-dcd2s\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935372 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4jjw\" (UniqueName: \"kubernetes.io/projected/3c09a221-05c5-4aa7-a59f-7501885dd323-kube-api-access-t4jjw\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935391 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc1146e5-d235-43a2-af92-33464c191179-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zzzhq\" (UID: \"fc1146e5-d235-43a2-af92-33464c191179\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935421 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935443 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935464 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935494 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-serving-cert\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935512 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/609de4f2-7b79-438a-b5c5-a2650396bc23-trusted-ca\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935530 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935552 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gk6lc\" (UniqueName: \"kubernetes.io/projected/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-kube-api-access-gk6lc\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935572 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9s98\" (UniqueName: \"kubernetes.io/projected/70053511-152f-4649-a478-cbce9a4bd8e5-kube-api-access-z9s98\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935591 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935613 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/70053511-152f-4649-a478-cbce9a4bd8e5-machine-approver-tls\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935633 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935650 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-service-ca\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935674 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-policies\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935700 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c09a221-05c5-4aa7-a59f-7501885dd323-console-oauth-config\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935741 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609de4f2-7b79-438a-b5c5-a2650396bc23-serving-cert\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935764 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935787 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm4mc\" (UniqueName: \"kubernetes.io/projected/fc1146e5-d235-43a2-af92-33464c191179-kube-api-access-hm4mc\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zzzhq\" (UID: \"fc1146e5-d235-43a2-af92-33464c191179\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935811 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235bb0bc-4887-4dfc-8a63-4f919855ef2c-config\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935830 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935846 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-trusted-ca-bundle\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935864 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-config\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935880 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwfsl\" (UniqueName: \"kubernetes.io/projected/609de4f2-7b79-438a-b5c5-a2650396bc23-kube-api-access-mwfsl\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935898 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70053511-152f-4649-a478-cbce9a4bd8e5-config\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935912 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935932 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935960 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-console-config\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.935990 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/235bb0bc-4887-4dfc-8a63-4f919855ef2c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936027 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936055 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-client\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936079 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbhnh\" (UniqueName: \"kubernetes.io/projected/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-kube-api-access-lbhnh\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936106 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09a221-05c5-4aa7-a59f-7501885dd323-console-serving-cert\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936124 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-tmp\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936204 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-dir\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.936939 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609de4f2-7b79-438a-b5c5-a2650396bc23-config\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.937999 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/70053511-152f-4649-a478-cbce9a4bd8e5-auth-proxy-config\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 
00:12:19.938036 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-ca\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.938322 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-tmp-dir\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.939507 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-service-ca\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.940030 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.940127 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.940690 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70053511-152f-4649-a478-cbce9a4bd8e5-config\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.940789 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.941051 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/235bb0bc-4887-4dfc-8a63-4f919855ef2c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.941133 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-config\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: 
\"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.941118 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-policies\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.942669 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/609de4f2-7b79-438a-b5c5-a2650396bc23-trusted-ca\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.943104 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/235bb0bc-4887-4dfc-8a63-4f919855ef2c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.946266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235bb0bc-4887-4dfc-8a63-4f919855ef2c-config\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.947903 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-etcd-client\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948035 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948240 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/70053511-152f-4649-a478-cbce9a4bd8e5-machine-approver-tls\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948289 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948400 
5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948403 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609de4f2-7b79-438a-b5c5-a2650396bc23-serving-cert\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948665 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.948896 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.949876 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.950901 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-serving-cert\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.951928 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.951990 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.952480 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.969009 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-mphjp"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.970132 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.978794 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.991852 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.994570 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx"] Jan 30 00:12:19 crc kubenswrapper[5117]: I0130 00:12:19.994985 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.011821 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.021812 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.022009 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.032553 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.032955 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.036510 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037055 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hm4mc\" (UniqueName: \"kubernetes.io/projected/fc1146e5-d235-43a2-af92-33464c191179-kube-api-access-hm4mc\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zzzhq\" (UID: \"fc1146e5-d235-43a2-af92-33464c191179\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037092 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-trusted-ca-bundle\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037137 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwrr5\" (UniqueName: \"kubernetes.io/projected/bc268a8d-137f-49eb-bb96-b696fdf66ccc-kube-api-access-bwrr5\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037158 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc268a8d-137f-49eb-bb96-b696fdf66ccc-serving-cert\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037180 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-console-config\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037223 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037241 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8jd\" (UniqueName: \"kubernetes.io/projected/c273d9d0-bf2b-4efa-a942-42c772dc7f20-kube-api-access-xt8jd\") pod \"cluster-samples-operator-6b564684c8-6gn48\" (UID: \"c273d9d0-bf2b-4efa-a942-42c772dc7f20\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037262 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lbhnh\" (UniqueName: \"kubernetes.io/projected/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-kube-api-access-lbhnh\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037301 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09a221-05c5-4aa7-a59f-7501885dd323-console-serving-cert\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037319 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-tmp\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037335 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k8kvj\" (UniqueName: \"kubernetes.io/projected/e161fe62-f260-4253-a91c-00d71e12cd51-kube-api-access-k8kvj\") pod \"downloads-747b44746d-mq4qt\" (UID: \"e161fe62-f260-4253-a91c-00d71e12cd51\") " pod="openshift-console/downloads-747b44746d-mq4qt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037382 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-oauth-serving-cert\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037406 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037432 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4jjw\" (UniqueName: \"kubernetes.io/projected/3c09a221-05c5-4aa7-a59f-7501885dd323-kube-api-access-t4jjw\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037473 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc1146e5-d235-43a2-af92-33464c191179-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zzzhq\" (UID: \"fc1146e5-d235-43a2-af92-33464c191179\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037507 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c273d9d0-bf2b-4efa-a942-42c772dc7f20-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6gn48\" (UID: \"c273d9d0-bf2b-4efa-a942-42c772dc7f20\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037563 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037585 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037752 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc268a8d-137f-49eb-bb96-b696fdf66ccc-available-featuregates\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037814 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-service-ca\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.037860 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c09a221-05c5-4aa7-a59f-7501885dd323-console-oauth-config\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.040381 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.041624 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-tmp\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.044089 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-ca-trust-extracted-pem\") pod 
\"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.044437 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-console-config\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.044615 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-service-ca\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.044654 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-oauth-serving-cert\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.045822 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc1146e5-d235-43a2-af92-33464c191179-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zzzhq\" (UID: \"fc1146e5-d235-43a2-af92-33464c191179\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.045971 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c09a221-05c5-4aa7-a59f-7501885dd323-trusted-ca-bundle\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.046080 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09a221-05c5-4aa7-a59f-7501885dd323-console-serving-cert\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.047251 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.049928 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ndwrw"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.050269 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.050741 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.050269 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.051531 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c09a221-05c5-4aa7-a59f-7501885dd323-console-oauth-config\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.052708 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: W0130 00:12:20.070859 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-4544995a1586a9dfc33b8e46543710ddd3e4a4e48c31d71bc1aac3e83c496388 WatchSource:0}: Error finding container 4544995a1586a9dfc33b8e46543710ddd3e4a4e48c31d71bc1aac3e83c496388: Status 404 returned error can't find the container with id 4544995a1586a9dfc33b8e46543710ddd3e4a4e48c31d71bc1aac3e83c496388 Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.085704 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl96g\" (UniqueName: \"kubernetes.io/projected/2cf47fab-c86d-4283-b285-b4ca795bf6d6-kube-api-access-bl96g\") pod \"oauth-openshift-66458b6674-pgbnh\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.104941 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcd2s\" (UniqueName: \"kubernetes.io/projected/235bb0bc-4887-4dfc-8a63-4f919855ef2c-kube-api-access-dcd2s\") pod \"openshift-controller-manager-operator-686468bdd5-pvm2r\" (UID: \"235bb0bc-4887-4dfc-8a63-4f919855ef2c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.116379 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-2rttq"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.117012 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.123514 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.134821 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwfsl\" (UniqueName: \"kubernetes.io/projected/609de4f2-7b79-438a-b5c5-a2650396bc23-kube-api-access-mwfsl\") pod \"console-operator-67c89758df-mmnjm\" (UID: \"609de4f2-7b79-438a-b5c5-a2650396bc23\") " pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.139069 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.139218 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc268a8d-137f-49eb-bb96-b696fdf66ccc-available-featuregates\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: E0130 00:12:20.139296 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.139213184 +0000 UTC m=+119.250749084 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.139477 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwrr5\" (UniqueName: \"kubernetes.io/projected/bc268a8d-137f-49eb-bb96-b696fdf66ccc-kube-api-access-bwrr5\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.139527 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc268a8d-137f-49eb-bb96-b696fdf66ccc-serving-cert\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140224 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c334d22-8d3f-4478-80b3-d3f4049c533f-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 
30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140262 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8jd\" (UniqueName: \"kubernetes.io/projected/c273d9d0-bf2b-4efa-a942-42c772dc7f20-kube-api-access-xt8jd\") pod \"cluster-samples-operator-6b564684c8-6gn48\" (UID: \"c273d9d0-bf2b-4efa-a942-42c772dc7f20\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140298 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c334d22-8d3f-4478-80b3-d3f4049c533f-config\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140339 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3c334d22-8d3f-4478-80b3-d3f4049c533f-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140412 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71716c09-9759-4c82-a34c-d20b59b0ed78-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140447 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71716c09-9759-4c82-a34c-d20b59b0ed78-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140477 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nhz9\" (UniqueName: \"kubernetes.io/projected/71716c09-9759-4c82-a34c-d20b59b0ed78-kube-api-access-9nhz9\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140511 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be4ec378-78db-4de0-ae65-691720b18b85-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140544 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a84db473-48ab-4f4b-a46c-a62c4db95393-tmp-dir\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" 
Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140567 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57ww\" (UniqueName: \"kubernetes.io/projected/a84db473-48ab-4f4b-a46c-a62c4db95393-kube-api-access-k57ww\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140591 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4ec378-78db-4de0-ae65-691720b18b85-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140624 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a84db473-48ab-4f4b-a46c-a62c4db95393-metrics-tls\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140645 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be4ec378-78db-4de0-ae65-691720b18b85-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140748 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c273d9d0-bf2b-4efa-a942-42c772dc7f20-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6gn48\" (UID: \"c273d9d0-bf2b-4efa-a942-42c772dc7f20\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140781 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71716c09-9759-4c82-a34c-d20b59b0ed78-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140804 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c334d22-8d3f-4478-80b3-d3f4049c533f-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.140981 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be4ec378-78db-4de0-ae65-691720b18b85-config\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.144278 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c273d9d0-bf2b-4efa-a942-42c772dc7f20-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6gn48\" (UID: \"c273d9d0-bf2b-4efa-a942-42c772dc7f20\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.146192 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk6lc\" (UniqueName: \"kubernetes.io/projected/20399794-4fdc-4e83-ac69-2b65f2a3bb2c-kube-api-access-gk6lc\") pod \"etcd-operator-69b85846b6-4bnp9\" (UID: \"20399794-4fdc-4e83-ac69-2b65f2a3bb2c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.151693 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc268a8d-137f-49eb-bb96-b696fdf66ccc-available-featuregates\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.169183 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9s98\" (UniqueName: \"kubernetes.io/projected/70053511-152f-4649-a478-cbce9a4bd8e5-kube-api-access-z9s98\") pod \"machine-approver-54c688565-xw2m5\" (UID: \"70053511-152f-4649-a478-cbce9a4bd8e5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.170807 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.174422 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc268a8d-137f-49eb-bb96-b696fdf66ccc-serving-cert\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.191203 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.192468 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.193263 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.208649 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.213315 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.224214 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.230478 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.231184 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.236581 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.236862 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247487 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c334d22-8d3f-4478-80b3-d3f4049c533f-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247532 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c334d22-8d3f-4478-80b3-d3f4049c533f-config\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247561 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3c334d22-8d3f-4478-80b3-d3f4049c533f-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247591 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71716c09-9759-4c82-a34c-d20b59b0ed78-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247615 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/71716c09-9759-4c82-a34c-d20b59b0ed78-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247633 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nhz9\" (UniqueName: \"kubernetes.io/projected/71716c09-9759-4c82-a34c-d20b59b0ed78-kube-api-access-9nhz9\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247660 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be4ec378-78db-4de0-ae65-691720b18b85-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247679 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a84db473-48ab-4f4b-a46c-a62c4db95393-tmp-dir\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247694 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k57ww\" (UniqueName: \"kubernetes.io/projected/a84db473-48ab-4f4b-a46c-a62c4db95393-kube-api-access-k57ww\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247709 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4ec378-78db-4de0-ae65-691720b18b85-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247744 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a84db473-48ab-4f4b-a46c-a62c4db95393-metrics-tls\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247773 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be4ec378-78db-4de0-ae65-691720b18b85-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.247803 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: 
\"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.248384 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71716c09-9759-4c82-a34c-d20b59b0ed78-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.248416 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c334d22-8d3f-4478-80b3-d3f4049c533f-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.248438 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be4ec378-78db-4de0-ae65-691720b18b85-config\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.248439 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3c334d22-8d3f-4478-80b3-d3f4049c533f-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.248510 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be4ec378-78db-4de0-ae65-691720b18b85-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.248566 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a84db473-48ab-4f4b-a46c-a62c4db95393-tmp-dir\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.254565 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a09afae3-bd41-4f19-af49-34689367f229-metrics-certs\") pod \"network-metrics-daemon-q7tcw\" (UID: \"a09afae3-bd41-4f19-af49-34689367f229\") " pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.271177 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.292767 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.296450 5117 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.297231 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.302013 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a84db473-48ab-4f4b-a46c-a62c4db95393-metrics-tls\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.307650 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.313182 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.331416 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.333144 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.353046 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.367924 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71716c09-9759-4c82-a34c-d20b59b0ed78-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.370492 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.376646 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-nlhql"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.377320 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.390179 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.395632 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.395689 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q7tcw" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.409064 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71716c09-9759-4c82-a34c-d20b59b0ed78-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.411678 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.414206 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.431668 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.432164 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.452119 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.470622 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.491320 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.501332 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c334d22-8d3f-4478-80b3-d3f4049c533f-config\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.505101 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c334d22-8d3f-4478-80b3-d3f4049c533f-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.513049 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.553136 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbhnh\" (UniqueName: \"kubernetes.io/projected/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-kube-api-access-lbhnh\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 
00:12:20.569307 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4jjw\" (UniqueName: \"kubernetes.io/projected/3c09a221-05c5-4aa7-a59f-7501885dd323-kube-api-access-t4jjw\") pod \"console-64d44f6ddf-dvncc\" (UID: \"3c09a221-05c5-4aa7-a59f-7501885dd323\") " pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.589151 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8kvj\" (UniqueName: \"kubernetes.io/projected/e161fe62-f260-4253-a91c-00d71e12cd51-kube-api-access-k8kvj\") pod \"downloads-747b44746d-mq4qt\" (UID: \"e161fe62-f260-4253-a91c-00d71e12cd51\") " pod="openshift-console/downloads-747b44746d-mq4qt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.611429 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57b0c884-b5a1-4434-a0e9-b9b36cb88c3d-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-lj2tn\" (UID: \"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.629534 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm4mc\" (UniqueName: \"kubernetes.io/projected/fc1146e5-d235-43a2-af92-33464c191179-kube-api-access-hm4mc\") pod \"control-plane-machine-set-operator-75ffdb6fcd-zzzhq\" (UID: \"fc1146e5-d235-43a2-af92-33464c191179\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.635304 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.651557 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:12:20 crc kubenswrapper[5117]: W0130 00:12:20.659884 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod609de4f2_7b79_438a_b5c5_a2650396bc23.slice/crio-cd2e895fc8ab04b30668058c1b6f0804826ee3836d83ed2df7fb87c0ec3da739 WatchSource:0}: Error finding container cd2e895fc8ab04b30668058c1b6f0804826ee3836d83ed2df7fb87c0ec3da739: Status 404 returned error can't find the container with id cd2e895fc8ab04b30668058c1b6f0804826ee3836d83ed2df7fb87c0ec3da739 Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.670295 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.692138 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:12:20 crc kubenswrapper[5117]: W0130 00:12:20.699837 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20399794_4fdc_4e83_ac69_2b65f2a3bb2c.slice/crio-74c66648ca9e3904d86343f49834a5c554ff6d1d46271567ca6252f1599c56d7 WatchSource:0}: Error finding container 74c66648ca9e3904d86343f49834a5c554ff6d1d46271567ca6252f1599c56d7: Status 404 returned error can't find the container with id 
74c66648ca9e3904d86343f49834a5c554ff6d1d46271567ca6252f1599c56d7 Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.710904 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.722006 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be4ec378-78db-4de0-ae65-691720b18b85-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.729811 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.739798 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be4ec378-78db-4de0-ae65-691720b18b85-config\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.746522 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.749675 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.758012 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.768164 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.770980 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.777798 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-mq4qt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.792067 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.830637 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8jd\" (UniqueName: \"kubernetes.io/projected/c273d9d0-bf2b-4efa-a942-42c772dc7f20-kube-api-access-xt8jd\") pod \"cluster-samples-operator-6b564684c8-6gn48\" (UID: \"c273d9d0-bf2b-4efa-a942-42c772dc7f20\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.851295 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwrr5\" (UniqueName: \"kubernetes.io/projected/bc268a8d-137f-49eb-bb96-b696fdf66ccc-kube-api-access-bwrr5\") pod \"openshift-config-operator-5777786469-nkcjt\" (UID: \"bc268a8d-137f-49eb-bb96-b696fdf66ccc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.875930 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.896102 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.896185 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.911455 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.931720 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.935514 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4544995a1586a9dfc33b8e46543710ddd3e4a4e48c31d71bc1aac3e83c496388"} Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.935564 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" event={"ID":"eb191c78-b1b1-4b69-b609-210416eb3356","Type":"ContainerStarted","Data":"e76baec32ee4694a878130ac5c59a178acbac85a0f06a4e1b6ca8abed52ecc60"} Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.935584 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" event={"ID":"70053511-152f-4649-a478-cbce9a4bd8e5","Type":"ContainerStarted","Data":"21dc2f764c44fc5922352406a051fe5e4ede68c149a078ca39f6ef2b2221f4a9"} Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.935604 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29495520-ngpdz"] Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.935652 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.952430 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.970969 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: I0130 00:12:20.992036 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:12:20 crc kubenswrapper[5117]: W0130 00:12:20.995466 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c09a221_05c5_4aa7_a59f_7501885dd323.slice/crio-d950d89c0e3a69fb3ade5c7499f2cbc17e05775757db26f7c757567b85ebd7cb WatchSource:0}: Error finding container d950d89c0e3a69fb3ade5c7499f2cbc17e05775757db26f7c757567b85ebd7cb: Status 404 returned error can't find the container with id d950d89c0e3a69fb3ade5c7499f2cbc17e05775757db26f7c757567b85ebd7cb Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.010271 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.030150 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.050974 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.070111 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:12:21 crc kubenswrapper[5117]: W0130 00:12:21.082832 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc268a8d_137f_49eb_bb96_b696fdf66ccc.slice/crio-66c37b49f99f058fda56d95e625f3097adc91e5100949ffdd7556dbcf198c793 WatchSource:0}: Error finding container 66c37b49f99f058fda56d95e625f3097adc91e5100949ffdd7556dbcf198c793: Status 404 returned error can't find the container with id 66c37b49f99f058fda56d95e625f3097adc91e5100949ffdd7556dbcf198c793 Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.090011 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.097673 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.110071 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.131400 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.150073 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.192060 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.192073 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c334d22-8d3f-4478-80b3-d3f4049c533f-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-lzcwf\" (UID: \"3c334d22-8d3f-4478-80b3-d3f4049c533f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.194458 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-ngpdz" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.214709 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nhz9\" (UniqueName: \"kubernetes.io/projected/71716c09-9759-4c82-a34c-d20b59b0ed78-kube-api-access-9nhz9\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.232370 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71716c09-9759-4c82-a34c-d20b59b0ed78-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-7zgrx\" (UID: \"71716c09-9759-4c82-a34c-d20b59b0ed78\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.235100 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.241004 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" Jan 30 00:12:21 crc kubenswrapper[5117]: W0130 00:12:21.243734 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1146e5_d235_43a2_af92_33464c191179.slice/crio-90260d9948e35c9c3efd1ff1de578b85766596f9c898c43569aa476cc25f533c WatchSource:0}: Error finding container 90260d9948e35c9c3efd1ff1de578b85766596f9c898c43569aa476cc25f533c: Status 404 returned error can't find the container with id 90260d9948e35c9c3efd1ff1de578b85766596f9c898c43569aa476cc25f533c Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.248740 5117 request.go:752] "Waited before sending request" delay="1.000025294s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.251589 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4ec378-78db-4de0-ae65-691720b18b85-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-9c26x\" (UID: \"be4ec378-78db-4de0-ae65-691720b18b85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.257069 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.260266 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.266245 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k57ww\" (UniqueName: \"kubernetes.io/projected/a84db473-48ab-4f4b-a46c-a62c4db95393-kube-api-access-k57ww\") pod \"dns-operator-799b87ffcd-mphjp\" (UID: \"a84db473-48ab-4f4b-a46c-a62c4db95393\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.282383 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" event={"ID":"2cf47fab-c86d-4283-b285-b4ca795bf6d6","Type":"ContainerStarted","Data":"93e62e159b3d05ca267941c64b096d3f41553153e7deb2d81740f45f497368b6"} Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.282426 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" event={"ID":"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95","Type":"ContainerStarted","Data":"ac346b68dfed60a098d7df0064a8d8bb951c810510bc074772a4c4df316281cd"} Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.282455 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" event={"ID":"682ed001-72d5-49dd-80bc-a8bb65323efd","Type":"ContainerStarted","Data":"8113e0af955a8a8d2d097ff16402f34ec4b186c1744be599d47636f626865315"} Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.282476 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.282758 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.291603 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.309275 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" event={"ID":"f1cd991b-8078-45cb-9591-ae3f5a4d4db4","Type":"ContainerStarted","Data":"9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad"} Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.311284 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.311318 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.309337 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" event={"ID":"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b","Type":"ContainerStarted","Data":"98ebb2063b4858c2d3a68ea585192a300a98cf17a19c579a38a65e073982b87f"} Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.312016 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.332786 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.352634 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.360429 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.360629 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.375355 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.377955 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.383530 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.390608 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.413743 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.439988 5117 generic.go:358] "Generic (PLEG): container finished" podID="fbfdc6c4-be51-4e2c-8ed3-44424ccde813" containerID="34fd5f1a5975ee030998e5c8bd1cb7263c9091dad21974604387d03709de557b" exitCode=0 Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.451179 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.456178 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.456837 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.456890 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.470186 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.475765 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e148c5fe-c209-4e41-82bb-aa78a79c0d66-service-ca-bundle\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.475809 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw89x\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-kube-api-access-lw89x\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.475845 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-registry-tls\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.475863 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-registry-certificates\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.475881 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-trusted-ca\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476118 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-default-certificate\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476244 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e140562-67a0-4a82-bfab-c678258c734e-ca-trust-extracted\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476283 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-stats-auth\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476311 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-metrics-certs\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476377 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476481 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8p4p\" (UniqueName: \"kubernetes.io/projected/e148c5fe-c209-4e41-82bb-aa78a79c0d66-kube-api-access-t8p4p\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476590 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-bound-sa-token\") pod 
\"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.476649 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e140562-67a0-4a82-bfab-c678258c734e-installation-pull-secrets\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.476746 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.976726393 +0000 UTC m=+105.088262283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.482894 5117 generic.go:358] "Generic (PLEG): container finished" podID="8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95" containerID="e50816d970a20dd97aba6bdc3d3c421eea525b8577ca45e438690c60c3403705" exitCode=0 Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.492279 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.510910 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.512199 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.531323 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.551316 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.568024 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-f65lp"] Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.568356 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578227 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578525 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-default-certificate\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578567 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-tmp-dir\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578592 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578615 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xnhs\" (UniqueName: \"kubernetes.io/projected/d3fceb33-fc7b-410d-bb5f-2332207d4d62-kube-api-access-8xnhs\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578643 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e140562-67a0-4a82-bfab-c678258c734e-ca-trust-extracted\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578661 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b759bca6-26d8-4e5b-8401-00a6be292d4d-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-xkn89\" (UID: \"b759bca6-26d8-4e5b-8401-00a6be292d4d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578678 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-stats-auth\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") 
" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.578701 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-metrics-certs\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579461 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579494 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46da46fd-f439-48fe-88ef-5cfeb085e371-config\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579521 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t8p4p\" (UniqueName: \"kubernetes.io/projected/e148c5fe-c209-4e41-82bb-aa78a79c0d66-kube-api-access-t8p4p\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.579623 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.079600857 +0000 UTC m=+105.191136747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579673 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpgcw\" (UniqueName: \"kubernetes.io/projected/f040142e-c8d1-4bcc-87e7-f96ed272260f-kube-api-access-kpgcw\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579766 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-bound-sa-token\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579826 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579848 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46da46fd-f439-48fe-88ef-5cfeb085e371-serving-cert\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.579937 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11e6a64e-9963-4871-9f58-956f659aec4a-secret-volume\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580009 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e140562-67a0-4a82-bfab-c678258c734e-installation-pull-secrets\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580059 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fceb33-fc7b-410d-bb5f-2332207d4d62-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580113 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkds\" (UniqueName: \"kubernetes.io/projected/7370f172-a96c-42c9-971b-76b5ef52303e-kube-api-access-fgkds\") pod \"image-pruner-29495520-ngpdz\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") " pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580142 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpsp\" (UniqueName: \"kubernetes.io/projected/11e6a64e-9963-4871-9f58-956f659aec4a-kube-api-access-rlpsp\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580188 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7370f172-a96c-42c9-971b-76b5ef52303e-serviceca\") pod \"image-pruner-29495520-ngpdz\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") " pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580220 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqvnb\" (UniqueName: \"kubernetes.io/projected/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-kube-api-access-mqvnb\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580248 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-serving-cert\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580277 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-config\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580313 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzrs\" (UniqueName: \"kubernetes.io/projected/44b89803-3ace-4031-9267-19e85991373e-kube-api-access-jnzrs\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580334 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-kube-api-access\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580359 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44b89803-3ace-4031-9267-19e85991373e-srv-cert\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580433 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f040142e-c8d1-4bcc-87e7-f96ed272260f-apiservice-cert\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580463 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/b759bca6-26d8-4e5b-8401-00a6be292d4d-kube-api-access-f48kh\") pod \"package-server-manager-77f986bd66-xkn89\" (UID: \"b759bca6-26d8-4e5b-8401-00a6be292d4d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580483 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f040142e-c8d1-4bcc-87e7-f96ed272260f-webhook-cert\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580561 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e148c5fe-c209-4e41-82bb-aa78a79c0d66-service-ca-bundle\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580720 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/44b89803-3ace-4031-9267-19e85991373e-tmpfs\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580749 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vjs\" (UniqueName: \"kubernetes.io/projected/16dc898b-ab99-4df1-a84e-a3d57d7ccd84-kube-api-access-c8vjs\") pod \"migrator-866fcbc849-l64ns\" (UID: \"16dc898b-ab99-4df1-a84e-a3d57d7ccd84\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580766 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdl6\" (UniqueName: \"kubernetes.io/projected/abfbd0d0-cfec-4caf-aa18-2d2fb1beb091-kube-api-access-vvdl6\") pod \"multus-admission-controller-69db94689b-nlhql\" (UID: \"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091\") " pod="openshift-multus/multus-admission-controller-69db94689b-nlhql"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580780 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f040142e-c8d1-4bcc-87e7-f96ed272260f-tmpfs\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580797 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44b89803-3ace-4031-9267-19e85991373e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580814 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lw89x\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-kube-api-access-lw89x\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580838 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-registry-tls\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580858 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abfbd0d0-cfec-4caf-aa18-2d2fb1beb091-webhook-certs\") pod \"multus-admission-controller-69db94689b-nlhql\" (UID: \"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091\") " pod="openshift-multus/multus-admission-controller-69db94689b-nlhql"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580875 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wt9z\" (UniqueName: \"kubernetes.io/projected/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-kube-api-access-7wt9z\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.580960 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e140562-67a0-4a82-bfab-c678258c734e-ca-trust-extracted\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581040 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-registry-certificates\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581263 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-images\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581287 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-trusted-ca\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581305 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581320 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsphw\" (UniqueName: \"kubernetes.io/projected/46da46fd-f439-48fe-88ef-5cfeb085e371-kube-api-access-rsphw\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581355 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fceb33-fc7b-410d-bb5f-2332207d4d62-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581372 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e6a64e-9963-4871-9f58-956f659aec4a-config-volume\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.581846 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e148c5fe-c209-4e41-82bb-aa78a79c0d66-service-ca-bundle\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.585743 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-default-certificate\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.587203 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-metrics-certs\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.587208 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e148c5fe-c209-4e41-82bb-aa78a79c0d66-stats-auth\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.591209 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.612006 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.615364 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e140562-67a0-4a82-bfab-c678258c734e-installation-pull-secrets\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.615815 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-trusted-ca\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.617082 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-registry-certificates\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.621348 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-registry-tls\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.634651 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.656043 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.671357 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682182 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-images\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682220 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jh75\" (UniqueName: \"kubernetes.io/projected/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-kube-api-access-9jh75\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682241 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682478 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsphw\" (UniqueName: \"kubernetes.io/projected/46da46fd-f439-48fe-88ef-5cfeb085e371-kube-api-access-rsphw\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682590 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fceb33-fc7b-410d-bb5f-2332207d4d62-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682623 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e6a64e-9963-4871-9f58-956f659aec4a-config-volume\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682683 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-tmp-dir\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682735 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-tmpfs\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682778 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682804 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xnhs\" (UniqueName: \"kubernetes.io/projected/d3fceb33-fc7b-410d-bb5f-2332207d4d62-kube-api-access-8xnhs\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682840 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b759bca6-26d8-4e5b-8401-00a6be292d4d-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-xkn89\" (UID: \"b759bca6-26d8-4e5b-8401-00a6be292d4d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682878 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682910 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46da46fd-f439-48fe-88ef-5cfeb085e371-config\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682944 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-images\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682945 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683040 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kpgcw\" (UniqueName: \"kubernetes.io/projected/f040142e-c8d1-4bcc-87e7-f96ed272260f-kube-api-access-kpgcw\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683073 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683089 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46da46fd-f439-48fe-88ef-5cfeb085e371-serving-cert\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683124 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11e6a64e-9963-4871-9f58-956f659aec4a-secret-volume\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683166 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fceb33-fc7b-410d-bb5f-2332207d4d62-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683213 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkds\" (UniqueName: \"kubernetes.io/projected/7370f172-a96c-42c9-971b-76b5ef52303e-kube-api-access-fgkds\") pod \"image-pruner-29495520-ngpdz\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") " pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683230 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlpsp\" (UniqueName: \"kubernetes.io/projected/11e6a64e-9963-4871-9f58-956f659aec4a-kube-api-access-rlpsp\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.683269 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.183253972 +0000 UTC m=+105.294789862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.682909 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683859 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7370f172-a96c-42c9-971b-76b5ef52303e-serviceca\") pod \"image-pruner-29495520-ngpdz\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") " pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683882 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fceb33-fc7b-410d-bb5f-2332207d4d62-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683893 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mqvnb\" (UniqueName: \"kubernetes.io/projected/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-kube-api-access-mqvnb\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683928 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-serving-cert\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.683968 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-srv-cert\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.684014 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-config\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.684046 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jnzrs\" (UniqueName: \"kubernetes.io/projected/44b89803-3ace-4031-9267-19e85991373e-kube-api-access-jnzrs\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.684078 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-kube-api-access\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.684096 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44b89803-3ace-4031-9267-19e85991373e-srv-cert\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.684119 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f040142e-c8d1-4bcc-87e7-f96ed272260f-apiservice-cert\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.684933 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46da46fd-f439-48fe-88ef-5cfeb085e371-config\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685030 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/b759bca6-26d8-4e5b-8401-00a6be292d4d-kube-api-access-f48kh\") pod \"package-server-manager-77f986bd66-xkn89\" (UID: \"b759bca6-26d8-4e5b-8401-00a6be292d4d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685031 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-tmp-dir\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685093 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f040142e-c8d1-4bcc-87e7-f96ed272260f-webhook-cert\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685825 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/44b89803-3ace-4031-9267-19e85991373e-tmpfs\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685884 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685910 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8vjs\" (UniqueName: \"kubernetes.io/projected/16dc898b-ab99-4df1-a84e-a3d57d7ccd84-kube-api-access-c8vjs\") pod \"migrator-866fcbc849-l64ns\" (UID: \"16dc898b-ab99-4df1-a84e-a3d57d7ccd84\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685937 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdl6\" (UniqueName: \"kubernetes.io/projected/abfbd0d0-cfec-4caf-aa18-2d2fb1beb091-kube-api-access-vvdl6\") pod \"multus-admission-controller-69db94689b-nlhql\" (UID: \"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091\") " pod="openshift-multus/multus-admission-controller-69db94689b-nlhql"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685975 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f040142e-c8d1-4bcc-87e7-f96ed272260f-tmpfs\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.685996 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44b89803-3ace-4031-9267-19e85991373e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.686146 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abfbd0d0-cfec-4caf-aa18-2d2fb1beb091-webhook-certs\") pod \"multus-admission-controller-69db94689b-nlhql\" (UID: \"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091\") " pod="openshift-multus/multus-admission-controller-69db94689b-nlhql"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.686173 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wt9z\" (UniqueName: \"kubernetes.io/projected/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-kube-api-access-7wt9z\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.686297 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7370f172-a96c-42c9-971b-76b5ef52303e-serviceca\") pod \"image-pruner-29495520-ngpdz\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") " pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.686330 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/44b89803-3ace-4031-9267-19e85991373e-tmpfs\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.686592 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.686790 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f040142e-c8d1-4bcc-87e7-f96ed272260f-tmpfs\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.688136 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e6a64e-9963-4871-9f58-956f659aec4a-config-volume\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.688643 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46da46fd-f439-48fe-88ef-5cfeb085e371-serving-cert\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.690995 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.691863 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11e6a64e-9963-4871-9f58-956f659aec4a-secret-volume\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.691956 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.693127 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.693469 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44b89803-3ace-4031-9267-19e85991373e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.693542 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fceb33-fc7b-410d-bb5f-2332207d4d62-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.694171 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44b89803-3ace-4031-9267-19e85991373e-srv-cert\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.695140 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/abfbd0d0-cfec-4caf-aa18-2d2fb1beb091-webhook-certs\") pod \"multus-admission-controller-69db94689b-nlhql\" (UID: \"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091\") " pod="openshift-multus/multus-admission-controller-69db94689b-nlhql"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.716929 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.731552 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.739081 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-serving-cert\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.747236 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" event={"ID":"fbfdc6c4-be51-4e2c-8ed3-44424ccde813","Type":"ContainerStarted","Data":"1320d847c5685f0c181a5b17f5d95bfe623e1ba845f4c13d00e88713a627ee6e"}
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.747296 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-8r5cz"]
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.747448 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.751074 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.759039 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-config\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.769987 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 30 00:12:21 crc kubenswrapper[5117]: W0130 00:12:21.781638 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda84db473_48ab_4f4b_a46c_a62c4db95393.slice/crio-4442ebf39b58c268c8ff8376033c44d26c862b5b980166e285d9667db7d9f5e0 WatchSource:0}: Error finding container 4442ebf39b58c268c8ff8376033c44d26c862b5b980166e285d9667db7d9f5e0: Status 404 returned error can't find the container with id 4442ebf39b58c268c8ff8376033c44d26c862b5b980166e285d9667db7d9f5e0
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788198 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788382 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-tmpfs\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788420 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92f91bd9-b566-4246-9ac7-9a591ec358b9-tmp\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788504 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788577 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-srv-cert\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788603 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788665 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788689 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bvfp\" (UniqueName: \"kubernetes.io/projected/92f91bd9-b566-4246-9ac7-9a591ec358b9-kube-api-access-7bvfp\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.788755 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9jh75\" (UniqueName: \"kubernetes.io/projected/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-kube-api-access-9jh75\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.789042 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.289015257 +0000 UTC m=+105.400551147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.790242 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-tmpfs\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.790685 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.794178 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802194 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802384 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq"]
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802461 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"]
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802539 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" event={"ID":"fbfdc6c4-be51-4e2c-8ed3-44424ccde813","Type":"ContainerDied","Data":"34fd5f1a5975ee030998e5c8bd1cb7263c9091dad21974604387d03709de557b"}
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802794 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s"]
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802903 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" event={"ID":"fc1146e5-d235-43a2-af92-33464c191179","Type":"ContainerStarted","Data":"90260d9948e35c9c3efd1ff1de578b85766596f9c898c43569aa476cc25f533c"}
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802993 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx"]
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.802556 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-8r5cz"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.803063 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"313317a70886810c7f5020cdd90657df54efb1419b7734cbde531d2d45b7477a"}
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.803216 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-85jpm"]
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.810692 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.830281 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.837567 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b759bca6-26d8-4e5b-8401-00a6be292d4d-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-xkn89\" (UID: \"b759bca6-26d8-4e5b-8401-00a6be292d4d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.855144 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.862126 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f040142e-c8d1-4bcc-87e7-f96ed272260f-apiservice-cert\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.864285 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f040142e-c8d1-4bcc-87e7-f96ed272260f-webhook-cert\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.870659 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.884978 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-srv-cert\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890364 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/55051c39-0e72-4600-aa52-65bf35260f75-signing-key\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz"
kubenswrapper[5117]: I0130 00:12:21.890400 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bvfp\" (UniqueName: \"kubernetes.io/projected/92f91bd9-b566-4246-9ac7-9a591ec358b9-kube-api-access-7bvfp\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890442 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/55051c39-0e72-4600-aa52-65bf35260f75-signing-cabundle\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890484 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92f91bd9-b566-4246-9ac7-9a591ec358b9-tmp\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890510 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdzp\" (UniqueName: \"kubernetes.io/projected/55051c39-0e72-4600-aa52-65bf35260f75-kube-api-access-2wdzp\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890534 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890580 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.890611 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.891170 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.39115449 +0000 UTC m=+105.502690380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.891311 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92f91bd9-b566-4246-9ac7-9a591ec358b9-tmp\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.926614 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8p4p\" (UniqueName: \"kubernetes.io/projected/e148c5fe-c209-4e41-82bb-aa78a79c0d66-kube-api-access-t8p4p\") pod \"router-default-68cf44c8b8-2rttq\" (UID: \"e148c5fe-c209-4e41-82bb-aa78a79c0d66\") " pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.945530 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-bound-sa-token\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.966502 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw89x\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-kube-api-access-lw89x\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.990306 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsphw\" (UniqueName: \"kubernetes.io/projected/46da46fd-f439-48fe-88ef-5cfeb085e371-kube-api-access-rsphw\") pod \"service-ca-operator-5b9c976747-nk8lc\" (UID: \"46da46fd-f439-48fe-88ef-5cfeb085e371\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.991335 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.991679 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.491640736 +0000 UTC m=+105.603176616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.991798 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/55051c39-0e72-4600-aa52-65bf35260f75-signing-cabundle\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.991918 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdzp\" (UniqueName: \"kubernetes.io/projected/55051c39-0e72-4600-aa52-65bf35260f75-kube-api-access-2wdzp\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.992258 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:21 crc kubenswrapper[5117]: I0130 00:12:21.992477 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/55051c39-0e72-4600-aa52-65bf35260f75-signing-key\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:21 crc kubenswrapper[5117]: E0130 00:12:21.992870 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.492862861 +0000 UTC m=+105.604398751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.007423 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xnhs\" (UniqueName: \"kubernetes.io/projected/d3fceb33-fc7b-410d-bb5f-2332207d4d62-kube-api-access-8xnhs\") pod \"kube-storage-version-migrator-operator-565b79b866-4mqgt\" (UID: \"d3fceb33-fc7b-410d-bb5f-2332207d4d62\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.034577 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpgcw\" (UniqueName: \"kubernetes.io/projected/f040142e-c8d1-4bcc-87e7-f96ed272260f-kube-api-access-kpgcw\") pod \"packageserver-7d4fc7d867-fw9pw\" (UID: \"f040142e-c8d1-4bcc-87e7-f96ed272260f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.044591 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkds\" (UniqueName: \"kubernetes.io/projected/7370f172-a96c-42c9-971b-76b5ef52303e-kube-api-access-fgkds\") pod \"image-pruner-29495520-ngpdz\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") " pod="openshift-image-registry/image-pruner-29495520-ngpdz" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.067981 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqvnb\" (UniqueName: \"kubernetes.io/projected/88bd31dd-a6a3-4f38-8459-0d1be720d2ba-kube-api-access-mqvnb\") pod \"machine-config-controller-f9cdd68f7-wtlqb\" (UID: \"88bd31dd-a6a3-4f38-8459-0d1be720d2ba\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.084089 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlpsp\" (UniqueName: \"kubernetes.io/projected/11e6a64e-9963-4871-9f58-956f659aec4a-kube-api-access-rlpsp\") pod \"collect-profiles-29495520-9nb7k\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.084392 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.098098 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.098417 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:22.598381949 +0000 UTC m=+105.709917829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.105010 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.115960 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnzrs\" (UniqueName: \"kubernetes.io/projected/44b89803-3ace-4031-9267-19e85991373e-kube-api-access-jnzrs\") pod \"catalog-operator-75ff9f647d-vxvgr\" (UID: \"44b89803-3ace-4031-9267-19e85991373e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.117958 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.120317 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-ngpdz" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.124948 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.125383 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11c8ac2-e86d-43b9-8985-ecfe6fb305ba-kube-api-access\") pod \"kube-apiserver-operator-575994946d-v7kvx\" (UID: \"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.148469 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48kh\" (UniqueName: \"kubernetes.io/projected/b759bca6-26d8-4e5b-8401-00a6be292d4d-kube-api-access-f48kh\") pod \"package-server-manager-77f986bd66-xkn89\" (UID: \"b759bca6-26d8-4e5b-8401-00a6be292d4d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.156814 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.170605 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdl6\" (UniqueName: \"kubernetes.io/projected/abfbd0d0-cfec-4caf-aa18-2d2fb1beb091-kube-api-access-vvdl6\") pod \"multus-admission-controller-69db94689b-nlhql\" (UID: \"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091\") " pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.191207 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.192259 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8vjs\" (UniqueName: \"kubernetes.io/projected/16dc898b-ab99-4df1-a84e-a3d57d7ccd84-kube-api-access-c8vjs\") pod \"migrator-866fcbc849-l64ns\" (UID: \"16dc898b-ab99-4df1-a84e-a3d57d7ccd84\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.196224 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.199663 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.200519 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.700497991 +0000 UTC m=+105.812033881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.212123 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.220394 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wt9z\" (UniqueName: \"kubernetes.io/projected/ffa6bede-24d1-4bc2-8b82-b7ebc48028b9-kube-api-access-7wt9z\") pod \"machine-config-operator-67c9d58cbb-mb68g\" (UID: \"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.233430 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.250766 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.264276 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.275539 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.289028 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.290640 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.301216 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.301534 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.801515142 +0000 UTC m=+105.913051032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.304568 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.327615 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jh75\" (UniqueName: \"kubernetes.io/projected/b565a4ec-53a5-4d82-bc5a-3f216a85bcfa-kube-api-access-9jh75\") pod \"olm-operator-5cdf44d969-8x2r4\" (UID: \"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.330651 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.336097 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/55051c39-0e72-4600-aa52-65bf35260f75-signing-key\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.355879 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.363602 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/55051c39-0e72-4600-aa52-65bf35260f75-signing-cabundle\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.370695 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.393610 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.393925 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.394741 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.402755 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.403296 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.903277544 +0000 UTC m=+106.014813444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.414393 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.454427 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bvfp\" (UniqueName: \"kubernetes.io/projected/92f91bd9-b566-4246-9ac7-9a591ec358b9-kube-api-access-7bvfp\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.456866 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.471537 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-f65lp\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.487226 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdzp\" (UniqueName: \"kubernetes.io/projected/55051c39-0e72-4600-aa52-65bf35260f75-kube-api-access-2wdzp\") pod \"service-ca-74545575db-8r5cz\" (UID: \"55051c39-0e72-4600-aa52-65bf35260f75\") " pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.504005 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.505090 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.004288185 +0000 UTC m=+106.115824075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: W0130 00:12:22.507988 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e6a64e_9963_4871_9f58_956f659aec4a.slice/crio-a01fffac5f3abd4d159908b27bb52273d95287dad1bde8b649dc888f00d35da0 WatchSource:0}: Error finding container a01fffac5f3abd4d159908b27bb52273d95287dad1bde8b649dc888f00d35da0: Status 404 returned error can't find the container with id a01fffac5f3abd4d159908b27bb52273d95287dad1bde8b649dc888f00d35da0 Jan 30 00:12:22 crc kubenswrapper[5117]: W0130 00:12:22.514236 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7370f172_a96c_42c9_971b_76b5ef52303e.slice/crio-6a3a5eeb368f8c8a938467f80169671dfb5efef26f8dc52d707569e3df677f75 WatchSource:0}: Error finding container 6a3a5eeb368f8c8a938467f80169671dfb5efef26f8dc52d707569e3df677f75: Status 404 returned error can't find the container with id 6a3a5eeb368f8c8a938467f80169671dfb5efef26f8dc52d707569e3df677f75 Jan 30 00:12:22 crc kubenswrapper[5117]: W0130 00:12:22.518618 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3fceb33_fc7b_410d_bb5f_2332207d4d62.slice/crio-4080aa1a0ec3ba4098665424e3c1704a2924868279d3702bea0dccc299d5435d 
WatchSource:0}: Error finding container 4080aa1a0ec3ba4098665424e3c1704a2924868279d3702bea0dccc299d5435d: Status 404 returned error can't find the container with id 4080aa1a0ec3ba4098665424e3c1704a2924868279d3702bea0dccc299d5435d Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.552301 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.561627 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.569932 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-8r5cz" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.607473 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.607855 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.107840007 +0000 UTC m=+106.219375897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.708453 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.708930 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.20890799 +0000 UTC m=+106.320443890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770656 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" event={"ID":"609de4f2-7b79-438a-b5c5-a2650396bc23","Type":"ContainerStarted","Data":"cd2e895fc8ab04b30668058c1b6f0804826ee3836d83ed2df7fb87c0ec3da739"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770744 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4a5f1dca346eea185bdfa6bfeda2f469978bfb043f98a80db803337c4f4207e7"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770794 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-scnb9"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770825 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" event={"ID":"f1cd991b-8078-45cb-9591-ae3f5a4d4db4","Type":"ContainerStarted","Data":"632bd71ed6200d8c3c063f866e29264eed700687cda02c2f2944bda4f747ede5"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770841 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770852 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" event={"ID":"d11df2d2-8553-4697-8bcf-9a96d37bcc06","Type":"ContainerStarted","Data":"9b21a44ef06d7238eca90c843ad7da424deb74f92da567de34c17684e7a973bd"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770872 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-dvncc"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770882 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" event={"ID":"682ed001-72d5-49dd-80bc-a8bb65323efd","Type":"ContainerStarted","Data":"0e1a6f561ff9debf85552cf971f9682ab248734b8d0591cd8d070c6f6175104b"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770894 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mq4qt" event={"ID":"e161fe62-f260-4253-a91c-00d71e12cd51","Type":"ContainerStarted","Data":"ff9cecd9054154d3f7a7f2a831883bdecc8dfeae88ff25f0cb3483a77ce16f28"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770905 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" event={"ID":"20399794-4fdc-4e83-ac69-2b65f2a3bb2c","Type":"ContainerStarted","Data":"74c66648ca9e3904d86343f49834a5c554ff6d1d46271567ca6252f1599c56d7"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770911 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.770918 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771452 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771466 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-nkcjt"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771477 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" event={"ID":"bc268a8d-137f-49eb-bb96-b696fdf66ccc","Type":"ContainerStarted","Data":"66c37b49f99f058fda56d95e625f3097adc91e5100949ffdd7556dbcf198c793"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771500 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" event={"ID":"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d","Type":"ContainerStarted","Data":"9a53ddfbcbfa19147f15fa8586c11f20ba90fd096a195f32ffceb1174461af1d"} Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771517 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771529 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771540 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771554 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771564 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.771574 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9jt6p"] Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.791416 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.812502 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.812942 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.312921226 +0000 UTC m=+106.424457116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.819987 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.835072 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.863093 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:12:22 crc kubenswrapper[5117]: W0130 00:12:22.894663 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46da46fd_f439_48fe_88ef_5cfeb085e371.slice/crio-8db64c881b1366abdf8f79e0f5d8fffa4a4bb1cfaf4b4e1dc62b009468265990 WatchSource:0}: Error finding container 8db64c881b1366abdf8f79e0f5d8fffa4a4bb1cfaf4b4e1dc62b009468265990: Status 404 returned error can't find the container with id 8db64c881b1366abdf8f79e0f5d8fffa4a4bb1cfaf4b4e1dc62b009468265990 Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.916376 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.916603 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3-cert\") pod \"ingress-canary-85jpm\" (UID: \"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3\") " pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.916753 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclbb\" (UniqueName: \"kubernetes.io/projected/484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3-kube-api-access-rclbb\") pod \"ingress-canary-85jpm\" (UID: \"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3\") " pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:22 crc kubenswrapper[5117]: E0130 00:12:22.917901 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.417867617 +0000 UTC m=+106.529403507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5117]: W0130 00:12:22.953872 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb759bca6_26d8_4e5b_8401_00a6be292d4d.slice/crio-50c9921e8534d428b8cddc2e9af670c15f63825068f9d979f60989e9274915d4 WatchSource:0}: Error finding container 50c9921e8534d428b8cddc2e9af670c15f63825068f9d979f60989e9274915d4: Status 404 returned error can't find the container with id 50c9921e8534d428b8cddc2e9af670c15f63825068f9d979f60989e9274915d4 Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.969868 5117 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-cgnvb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 00:12:22 crc kubenswrapper[5117]: I0130 00:12:22.969960 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.021247 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.021729 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.521684487 +0000 UTC m=+106.633220377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.022992 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rclbb\" (UniqueName: \"kubernetes.io/projected/484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3-kube-api-access-rclbb\") pod \"ingress-canary-85jpm\" (UID: \"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3\") " pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.023147 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3-cert\") pod \"ingress-canary-85jpm\" (UID: \"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3\") " pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:23 crc kubenswrapper[5117]: W0130 00:12:23.032267 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf040142e_c8d1_4bcc_87e7_f96ed272260f.slice/crio-3afe20db3475ba617b5d9520b4bdbacd54af687debcb5d3fb7f6ef481c3e0d5d WatchSource:0}: Error finding container 3afe20db3475ba617b5d9520b4bdbacd54af687debcb5d3fb7f6ef481c3e0d5d: Status 404 returned error can't find the container with id 3afe20db3475ba617b5d9520b4bdbacd54af687debcb5d3fb7f6ef481c3e0d5d Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.032728 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3-cert\") pod \"ingress-canary-85jpm\" (UID: \"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3\") " pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.035685 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.035909 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.035928 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-nlhql"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.035975 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-mphjp"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.036004 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.036018 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" event={"ID":"eb191c78-b1b1-4b69-b609-210416eb3356","Type":"ContainerStarted","Data":"6e653134294a876171722b23a25dca9f7839fa891b824b3b44f5a10bade30a4c"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.036075 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" event={"ID":"235bb0bc-4887-4dfc-8a63-4f919855ef2c","Type":"ContainerStarted","Data":"2d9f21ad2623ad718408350e96f36abcb13271e6eb0a34dd3dd1d519bc0ed3e1"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.036091 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" event={"ID":"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95","Type":"ContainerDied","Data":"e50816d970a20dd97aba6bdc3d3c421eea525b8577ca45e438690c60c3403705"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.036107 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-dvncc" event={"ID":"3c09a221-05c5-4aa7-a59f-7501885dd323","Type":"ContainerStarted","Data":"d950d89c0e3a69fb3ade5c7499f2cbc17e05775757db26f7c757567b85ebd7cb"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.036125 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-b52fx"] Jan 30 00:12:23 crc kubenswrapper[5117]: W0130 00:12:23.038006 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55051c39_0e72_4600_aa52_65bf35260f75.slice/crio-dfa9f2a430344e9c67f000527d562b8fc5f05a770ad03edb18e8fe763a217c0e WatchSource:0}: Error finding container dfa9f2a430344e9c67f000527d562b8fc5f05a770ad03edb18e8fe763a217c0e: Status 404 returned error can't find the container with id dfa9f2a430344e9c67f000527d562b8fc5f05a770ad03edb18e8fe763a217c0e Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051466 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" event={"ID":"a09afae3-bd41-4f19-af49-34689367f229","Type":"ContainerStarted","Data":"9e22561c01234d5e170fe12c9fc2b32c62d663e37715578c6a8fa9994ab83294"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051567 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgbnh"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051584 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ndwrw"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051623 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051636 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" event={"ID":"9c88bbcd-89bb-4c99-86aa-bb81f78a4a4b","Type":"ContainerStarted","Data":"088cd56a5db271440a4900098949069337e3286f342be8c1a9b2d426415d34f6"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051647 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051663 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-mmnjm"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051743 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051760 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-mq4qt"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051772 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" event={"ID":"e148c5fe-c209-4e41-82bb-aa78a79c0d66","Type":"ContainerStarted","Data":"12639855c37a974d32a53a5a13949cfba02bf273bff4c23fc429be7dd1268082"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051783 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" event={"ID":"be4ec378-78db-4de0-ae65-691720b18b85","Type":"ContainerStarted","Data":"263596812b491e4c568ff786073a9dcbf631e845fe0491fbd689187dba1a489f"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051817 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051833 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051851 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" event={"ID":"a84db473-48ab-4f4b-a46c-a62c4db95393","Type":"ContainerStarted","Data":"4442ebf39b58c268c8ff8376033c44d26c862b5b980166e285d9667db7d9f5e0"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051866 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" event={"ID":"71716c09-9759-4c82-a34c-d20b59b0ed78","Type":"ContainerStarted","Data":"dcb5c30ab96b6b87a4cdf3c60379e0f1a6e6e45c7527552691602f6546ea5552"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051902 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" event={"ID":"3c334d22-8d3f-4478-80b3-d3f4049c533f","Type":"ContainerStarted","Data":"984ea47eb6812a917915bc7e1378e57aae7096403684dcaff5d2047586e2fbfc"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051914 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"] Jan 30 00:12:23 crc 
kubenswrapper[5117]: I0130 00:12:23.051926 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051938 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.051949 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dtfxb"] Jan 30 00:12:23 crc kubenswrapper[5117]: W0130 00:12:23.053286 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb565a4ec_53a5_4d82_bc5a_3f216a85bcfa.slice/crio-dc46aaccce4d8e5fe600322233ff5b3b622e671540416e39f9ce1bea8759c27d WatchSource:0}: Error finding container dc46aaccce4d8e5fe600322233ff5b3b622e671540416e39f9ce1bea8759c27d: Status 404 returned error can't find the container with id dc46aaccce4d8e5fe600322233ff5b3b622e671540416e39f9ce1bea8759c27d Jan 30 00:12:23 crc kubenswrapper[5117]: W0130 00:12:23.058641 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabfbd0d0_cfec_4caf_aa18_2d2fb1beb091.slice/crio-f58dcaaac51f0a204da40d18064edb278a6f212b61a94eb401b83143ab82bb68 WatchSource:0}: Error finding container f58dcaaac51f0a204da40d18064edb278a6f212b61a94eb401b83143ab82bb68: Status 404 returned error can't find the container with id f58dcaaac51f0a204da40d18064edb278a6f212b61a94eb401b83143ab82bb68 Jan 30 00:12:23 crc kubenswrapper[5117]: W0130 00:12:23.066644 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16dc898b_ab99_4df1_a84e_a3d57d7ccd84.slice/crio-da89e81603f6bf84087003e71ef1a0cbf72ea2f56816b3252e4d590b9e16de8c WatchSource:0}: Error finding container da89e81603f6bf84087003e71ef1a0cbf72ea2f56816b3252e4d590b9e16de8c: Status 404 returned error can't find the container with id da89e81603f6bf84087003e71ef1a0cbf72ea2f56816b3252e4d590b9e16de8c Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.072912 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclbb\" (UniqueName: \"kubernetes.io/projected/484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3-kube-api-access-rclbb\") pod \"ingress-canary-85jpm\" (UID: \"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3\") " pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.091424 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.099776 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-85jpm" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.114408 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.124392 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.124693 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5d7d859-970a-4c9e-ba49-e2fb1facde62-node-bootstrap-token\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.124831 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndwfb\" (UniqueName: \"kubernetes.io/projected/c5d7d859-970a-4c9e-ba49-e2fb1facde62-kube-api-access-ndwfb\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.124916 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.624888759 +0000 UTC m=+106.736424649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.125000 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5d7d859-970a-4c9e-ba49-e2fb1facde62-certs\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.125053 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.125411 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:23.625390664 +0000 UTC m=+106.736926554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.130461 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.226617 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.226757 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5d7d859-970a-4c9e-ba49-e2fb1facde62-node-bootstrap-token\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.226796 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndwfb\" (UniqueName: \"kubernetes.io/projected/c5d7d859-970a-4c9e-ba49-e2fb1facde62-kube-api-access-ndwfb\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.226887 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.726842467 +0000 UTC m=+106.838378367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.227087 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5d7d859-970a-4c9e-ba49-e2fb1facde62-certs\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.227132 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.227663 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.72765345 +0000 UTC m=+106.839189350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.232564 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5d7d859-970a-4c9e-ba49-e2fb1facde62-certs\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.233234 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5d7d859-970a-4c9e-ba49-e2fb1facde62-node-bootstrap-token\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.264560 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndwfb\" (UniqueName: \"kubernetes.io/projected/c5d7d859-970a-4c9e-ba49-e2fb1facde62-kube-api-access-ndwfb\") pod \"machine-config-server-9jt6p\" (UID: \"c5d7d859-970a-4c9e-ba49-e2fb1facde62\") " pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: W0130 00:12:23.280531 5117 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod484b7da9_d77c_4b22_a8b3_ae64ca9a1ff3.slice/crio-49efd7291a7c609f46484fcda4b7c441022d06ff7b14d5a4016ec08144727b8c WatchSource:0}: Error finding container 49efd7291a7c609f46484fcda4b7c441022d06ff7b14d5a4016ec08144727b8c: Status 404 returned error can't find the container with id 49efd7291a7c609f46484fcda4b7c441022d06ff7b14d5a4016ec08144727b8c Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.328197 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.328355 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.828315851 +0000 UTC m=+106.939851761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.328982 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.329343 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.82933313 +0000 UTC m=+106.940869040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.357453 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9jt6p" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.430037 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.430245 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.930208567 +0000 UTC m=+107.041744467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.430638 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.431184 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.931160564 +0000 UTC m=+107.042696444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.531377 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.532142 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.032098102 +0000 UTC m=+107.143634022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.587408 5117 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-g7kqs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.587489 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.633827 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.634537 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.134511183 +0000 UTC m=+107.246047073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.736946 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.737066 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.237040577 +0000 UTC m=+107.348576467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.737300 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.737719 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.237710255 +0000 UTC m=+107.349246145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.839118 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.839614 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.339582451 +0000 UTC m=+107.451118371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.941785 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:23 crc kubenswrapper[5117]: E0130 00:12:23.942298 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.442274469 +0000 UTC m=+107.553810359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975060 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975112 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975130 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" event={"ID":"a09afae3-bd41-4f19-af49-34689367f229","Type":"ContainerStarted","Data":"9b2ba8f7e75c3581cfa4fe400930069a7888b232aaa8e858ac4251eb91eb229e"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975155 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-8r5cz"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975170 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9jt6p" event={"ID":"c5d7d859-970a-4c9e-ba49-e2fb1facde62","Type":"ContainerStarted","Data":"671ac5d56be22792ce37d2fcb8b0234970abd844d25039a0a0aba9783709659a"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975181 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975194 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-8r5cz" event={"ID":"55051c39-0e72-4600-aa52-65bf35260f75","Type":"ContainerStarted","Data":"dfa9f2a430344e9c67f000527d562b8fc5f05a770ad03edb18e8fe763a217c0e"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975205 5117 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" event={"ID":"fc1146e5-d235-43a2-af92-33464c191179","Type":"ContainerStarted","Data":"5d247983821d18c591067b5f62f4f6e68df1a022e4f455bde126b91ea8f4c5c6"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.975488 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.976468 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.976507 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" event={"ID":"16dc898b-ab99-4df1-a84e-a3d57d7ccd84","Type":"ContainerStarted","Data":"da89e81603f6bf84087003e71ef1a0cbf72ea2f56816b3252e4d590b9e16de8c"} Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.976541 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c28gc"] Jan 30 00:12:23 crc kubenswrapper[5117]: I0130 00:12:23.982008 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.044035 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.044307 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2t98\" (UniqueName: \"kubernetes.io/projected/49962195-77dc-47ef-a7dc-e9c1631d049d-kube-api-access-m2t98\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.044391 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/49962195-77dc-47ef-a7dc-e9c1631d049d-ready\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.044413 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49962195-77dc-47ef-a7dc-e9c1631d049d-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.044454 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49962195-77dc-47ef-a7dc-e9c1631d049d-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.044587 5117 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.544565496 +0000 UTC m=+107.656101386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.085969 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-ngpdz"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.086006 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-ngpdz" event={"ID":"7370f172-a96c-42c9-971b-76b5ef52303e","Type":"ContainerStarted","Data":"6a3a5eeb368f8c8a938467f80169671dfb5efef26f8dc52d707569e3df677f75"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.086153 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-f65lp"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.086216 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.086208 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4433b6a1e73b95c23fcc646926070c0598a237242044b5c48a31ed2384568ee2"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087375 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-85jpm" event={"ID":"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3","Type":"ContainerStarted","Data":"49efd7291a7c609f46484fcda4b7c441022d06ff7b14d5a4016ec08144727b8c"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087424 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-85jpm"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087442 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" event={"ID":"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091","Type":"ContainerStarted","Data":"f58dcaaac51f0a204da40d18064edb278a6f212b61a94eb401b83143ab82bb68"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087453 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" event={"ID":"70053511-152f-4649-a478-cbce9a4bd8e5","Type":"ContainerStarted","Data":"4663710a076d60abed79b02e3d84a7ffc4e592dcd1826e20773de2c4fb44abd6"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087465 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c28gc"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087482 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" 
event={"ID":"44b89803-3ace-4031-9267-19e85991373e","Type":"ContainerStarted","Data":"60887673db83b338d0f1a23fb9ba37a691bb74df50cc621d4c7d04af0ccede1e"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087493 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" event={"ID":"11e6a64e-9963-4871-9f58-956f659aec4a","Type":"ContainerStarted","Data":"a01fffac5f3abd4d159908b27bb52273d95287dad1bde8b649dc888f00d35da0"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087506 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" event={"ID":"2cf47fab-c86d-4283-b285-b4ca795bf6d6","Type":"ContainerStarted","Data":"bffa4609cdef287849b80d2ce5fa955eb891f139222b56fefe77cdb1450ec17f"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.087521 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2rpjg"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.088400 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" podStartSLOduration=83.088387003 podStartE2EDuration="1m23.088387003s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.627217424 +0000 UTC m=+105.738753334" watchObservedRunningTime="2026-01-30 00:12:24.088387003 +0000 UTC m=+107.199922893" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.089421 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" podStartSLOduration=83.089412472 podStartE2EDuration="1m23.089412472s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.669001803 +0000 UTC m=+105.780537703" watchObservedRunningTime="2026-01-30 00:12:24.089412472 +0000 UTC m=+107.200948362" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.091397 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.110626 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.131131 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.145927 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-registration-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.145985 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-plugins-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: 
\"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.146056 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-mountpoint-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.146099 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/49962195-77dc-47ef-a7dc-e9c1631d049d-ready\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.146121 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49962195-77dc-47ef-a7dc-e9c1631d049d-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.146143 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zr5\" (UniqueName: \"kubernetes.io/projected/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-kube-api-access-x6zr5\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147016 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49962195-77dc-47ef-a7dc-e9c1631d049d-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147163 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49962195-77dc-47ef-a7dc-e9c1631d049d-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147210 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-csi-data-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147233 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147262 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m2t98\" 
(UniqueName: \"kubernetes.io/projected/49962195-77dc-47ef-a7dc-e9c1631d049d-kube-api-access-m2t98\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147288 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/49962195-77dc-47ef-a7dc-e9c1631d049d-ready\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.147298 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-socket-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc" Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.147606 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.647590884 +0000 UTC m=+107.759126774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.148018 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49962195-77dc-47ef-a7dc-e9c1631d049d-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.186001 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2rpjg" Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.186085 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2rpjg"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187267 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" event={"ID":"682ed001-72d5-49dd-80bc-a8bb65323efd","Type":"ContainerStarted","Data":"bceeab01f804946a57a2f27a01c72e7fb51c8906c226e69726882895c406f8fc"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187294 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mq4qt" event={"ID":"e161fe62-f260-4253-a91c-00d71e12cd51","Type":"ContainerStarted","Data":"9c62a27b0b30148ce8a0d198f42684c3145e9781c894752015120616bab34c67"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187310 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" event={"ID":"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa","Type":"ContainerStarted","Data":"dc46aaccce4d8e5fe600322233ff5b3b622e671540416e39f9ce1bea8759c27d"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187321 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" event={"ID":"d3fceb33-fc7b-410d-bb5f-2332207d4d62","Type":"ContainerStarted","Data":"4080aa1a0ec3ba4098665424e3c1704a2924868279d3702bea0dccc299d5435d"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187332 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" event={"ID":"57b0c884-b5a1-4434-a0e9-b9b36cb88c3d","Type":"ContainerStarted","Data":"b39eba25b9302d4ae8f4560e9e0f8f637cec3fa777ab084a4f61e8f3e18792d7"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187342 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" event={"ID":"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba","Type":"ContainerStarted","Data":"a6f8220cce2d6e2b0f16454e9d68ddbd2e7cc28bc2018fd3409994c29fe6536c"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187358 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" event={"ID":"235bb0bc-4887-4dfc-8a63-4f919855ef2c","Type":"ContainerStarted","Data":"afc030b14c121cc66d2f245afd86d26763a1d14b45ec24c2e63c2b38da355e2c"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187371 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-dvncc" event={"ID":"3c09a221-05c5-4aa7-a59f-7501885dd323","Type":"ContainerStarted","Data":"5b479820a17fcf8ac0f77e2283f845ce1606ce82e6b599899750d6f582048468"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187380 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" event={"ID":"c273d9d0-bf2b-4efa-a942-42c772dc7f20","Type":"ContainerStarted","Data":"49caba21542616bc645bad27b50af24a8d67b8cc595dd87911fdfba6d5464037"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187392 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" 
event={"ID":"92f91bd9-b566-4246-9ac7-9a591ec358b9","Type":"ContainerStarted","Data":"2c6769ef815e932623bd67075ed2fca05942c9c606f1c08be05ea572cd50a9ca"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187402 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" event={"ID":"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9","Type":"ContainerStarted","Data":"46d08119713e7bd6fa0b232ee93a975044baad47779caafff042c9cf3948d4d3"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187412 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" event={"ID":"609de4f2-7b79-438a-b5c5-a2650396bc23","Type":"ContainerStarted","Data":"dfa28cddf6622e6b2395a98b47552bea4c115eb0b833be5ca892227971905d61"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187426 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" event={"ID":"b759bca6-26d8-4e5b-8401-00a6be292d4d","Type":"ContainerStarted","Data":"50c9921e8534d428b8cddc2e9af670c15f63825068f9d979f60989e9274915d4"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187438 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" event={"ID":"d11df2d2-8553-4697-8bcf-9a96d37bcc06","Type":"ContainerStarted","Data":"96627af9f27d564d884ace49bb7816a67b3b3a57be7057570a12521ffcb6d6fb"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187449 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" event={"ID":"20399794-4fdc-4e83-ac69-2b65f2a3bb2c","Type":"ContainerStarted","Data":"1ed9c8f818e29cf7465ae42dc9edd6accaf7c830a1a77cc733d0619626860d8a"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187460 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" event={"ID":"46da46fd-f439-48fe-88ef-5cfeb085e371","Type":"ContainerStarted","Data":"8db64c881b1366abdf8f79e0f5d8fffa4a4bb1cfaf4b4e1dc62b009468265990"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187472 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" event={"ID":"f040142e-c8d1-4bcc-87e7-f96ed272260f","Type":"ContainerStarted","Data":"3afe20db3475ba617b5d9520b4bdbacd54af687debcb5d3fb7f6ef481c3e0d5d"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187499 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" event={"ID":"88bd31dd-a6a3-4f38-8459-0d1be720d2ba","Type":"ContainerStarted","Data":"b80b0ebfb70041d044880dd71e3ec8754bab9c2379cb956e97f16d703a6adcda"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187510 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" event={"ID":"bc268a8d-137f-49eb-bb96-b696fdf66ccc","Type":"ContainerStarted","Data":"19044fe6bc8328881d6bea81a7f68ea45bf68135e81bf02c5d49029ca5ba1f7d"} Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187540 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-scnb9"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187611 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187621 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-b52fx"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187629 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187648 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187661 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187669 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgbnh"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187678 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187730 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-mmnjm"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187739 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q7tcw"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187748 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187758 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187766 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187774 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-dvncc"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187784 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-nkcjt"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187792 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-mq4qt"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187800 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187811 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187819 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187829 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf"] Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 
00:12:24.187838 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-mphjp"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187846 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187857 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-ngpdz"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187865 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187872 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187888 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.187900 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.189739 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2t98\" (UniqueName: \"kubernetes.io/projected/49962195-77dc-47ef-a7dc-e9c1631d049d-kube-api-access-m2t98\") pod \"cni-sysctl-allowlist-ds-dtfxb\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.190572 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.191670 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.192965 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.195806 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-f65lp"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.197233 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.205002 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.207853 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-8r5cz"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.209128 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.210043 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.212266 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.214364 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-nlhql"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.216470 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.226117 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-85jpm"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.230198 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.247945 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.248484 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cffb688a-2382-4a7a-85e8-f93ecbb27a02-metrics-tls\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.248628 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-mountpoint-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.248758 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cffb688a-2382-4a7a-85e8-f93ecbb27a02-tmp-dir\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.248925 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6zr5\" (UniqueName: \"kubernetes.io/projected/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-kube-api-access-x6zr5\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249086 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-csi-data-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249160 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-socket-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249190 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-registration-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249226 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-mountpoint-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249260 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmx9p\" (UniqueName: \"kubernetes.io/projected/cffb688a-2382-4a7a-85e8-f93ecbb27a02-kube-api-access-vmx9p\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249426 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-csi-data-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249356 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-plugins-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249500 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cffb688a-2382-4a7a-85e8-f93ecbb27a02-config-volume\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.249548 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.749519261 +0000 UTC m=+107.861055241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249792 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-registration-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249795 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-plugins-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.249816 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-socket-dir\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.256504 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m5l72" podStartSLOduration=83.256480257 podStartE2EDuration="1m23.256480257s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:23.799959542 +0000 UTC m=+106.911495452" watchObservedRunningTime="2026-01-30 00:12:24.256480257 +0000 UTC m=+107.368016157"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.275767 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-pvm2r" podStartSLOduration=83.275735201 podStartE2EDuration="1m23.275735201s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:24.018867671 +0000 UTC m=+107.130403571" watchObservedRunningTime="2026-01-30 00:12:24.275735201 +0000 UTC m=+107.387271101"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.289390 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zr5\" (UniqueName: \"kubernetes.io/projected/4f4cf379-6e53-4fc8-8527-4e80b9aaccbe-kube-api-access-x6zr5\") pod \"csi-hostpathplugin-c28gc\" (UID: \"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe\") " pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.293347 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb"
Jan 30 00:12:24 crc kubenswrapper[5117]: W0130 00:12:24.310243 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49962195_77dc_47ef_a7dc_e9c1631d049d.slice/crio-6c7bd7ba3d58d217aa8759156340e741dbdb9a1588ebd175833a393b2ff3c57e WatchSource:0}: Error finding container 6c7bd7ba3d58d217aa8759156340e741dbdb9a1588ebd175833a393b2ff3c57e: Status 404 returned error can't find the container with id 6c7bd7ba3d58d217aa8759156340e741dbdb9a1588ebd175833a393b2ff3c57e
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.351688 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cffb688a-2382-4a7a-85e8-f93ecbb27a02-tmp-dir\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.351776 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.351827 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9p\" (UniqueName: \"kubernetes.io/projected/cffb688a-2382-4a7a-85e8-f93ecbb27a02-kube-api-access-vmx9p\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.351848 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cffb688a-2382-4a7a-85e8-f93ecbb27a02-config-volume\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.351890 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cffb688a-2382-4a7a-85e8-f93ecbb27a02-metrics-tls\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.352800 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cffb688a-2382-4a7a-85e8-f93ecbb27a02-tmp-dir\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.353102 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.853087754 +0000 UTC m=+107.964623644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.354563 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cffb688a-2382-4a7a-85e8-f93ecbb27a02-config-volume\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.359971 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cffb688a-2382-4a7a-85e8-f93ecbb27a02-metrics-tls\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.369625 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmx9p\" (UniqueName: \"kubernetes.io/projected/cffb688a-2382-4a7a-85e8-f93ecbb27a02-kube-api-access-vmx9p\") pod \"dns-default-2rpjg\" (UID: \"cffb688a-2382-4a7a-85e8-f93ecbb27a02\") " pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.414323 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c28gc"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.453454 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.454021 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.953999152 +0000 UTC m=+108.065535042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.545735 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.555598 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.556151 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.056127314 +0000 UTC m=+108.167663204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.590182 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c28gc"]
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.590917 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" event={"ID":"49962195-77dc-47ef-a7dc-e9c1631d049d","Type":"ContainerStarted","Data":"6c7bd7ba3d58d217aa8759156340e741dbdb9a1588ebd175833a393b2ff3c57e"}
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.593727 5117 generic.go:358] "Generic (PLEG): container finished" podID="bc268a8d-137f-49eb-bb96-b696fdf66ccc" containerID="19044fe6bc8328881d6bea81a7f68ea45bf68135e81bf02c5d49029ca5ba1f7d" exitCode=0
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.593972 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" event={"ID":"bc268a8d-137f-49eb-bb96-b696fdf66ccc","Type":"ContainerDied","Data":"19044fe6bc8328881d6bea81a7f68ea45bf68135e81bf02c5d49029ca5ba1f7d"}
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.606270 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" event={"ID":"fbfdc6c4-be51-4e2c-8ed3-44424ccde813","Type":"ContainerStarted","Data":"8791a1a529673392c9d49e0727e6644d0d67748769ca824486ddce60e8c8bbd1"}
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.607704 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" event={"ID":"71716c09-9759-4c82-a34c-d20b59b0ed78","Type":"ContainerStarted","Data":"d93ebf156d593d0ac7cc2dbd321f7c20830546fd32ba0ddbec4bf5d68c5ec5e0"}
Jan 30 00:12:24 crc kubenswrapper[5117]: W0130 00:12:24.619608 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4cf379_6e53_4fc8_8527_4e80b9aaccbe.slice/crio-49c7513c0d7cfa7674763fe778788e48de0c63a274a007125ec6efeebc398429 WatchSource:0}: Error finding container 49c7513c0d7cfa7674763fe778788e48de0c63a274a007125ec6efeebc398429: Status 404 returned error can't find the container with id 49c7513c0d7cfa7674763fe778788e48de0c63a274a007125ec6efeebc398429
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.656791 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.657221 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.157173016 +0000 UTC m=+108.268708906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.735561 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2rpjg"]
Jan 30 00:12:24 crc kubenswrapper[5117]: W0130 00:12:24.741881 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcffb688a_2382_4a7a_85e8_f93ecbb27a02.slice/crio-993b741795295eb32b1cfa5b2224d2ae2f1758d34d883161af8fddb4266df364 WatchSource:0}: Error finding container 993b741795295eb32b1cfa5b2224d2ae2f1758d34d883161af8fddb4266df364: Status 404 returned error can't find the container with id 993b741795295eb32b1cfa5b2224d2ae2f1758d34d883161af8fddb4266df364
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.758486 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.758938 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.258911178 +0000 UTC m=+108.370447108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.860162 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.860199 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.360175275 +0000 UTC m=+108.471711165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.860512 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.860949 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.360930537 +0000 UTC m=+108.472466427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.945926 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-b52fx" podStartSLOduration=83.945890095 podStartE2EDuration="1m23.945890095s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:24.942807637 +0000 UTC m=+108.054343527" watchObservedRunningTime="2026-01-30 00:12:24.945890095 +0000 UTC m=+108.057426025"
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.962735 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.962864 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.462824602 +0000 UTC m=+108.574360492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.963201 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:24 crc kubenswrapper[5117]: E0130 00:12:24.963923 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.463916763 +0000 UTC m=+108.575452643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:24 crc kubenswrapper[5117]: I0130 00:12:24.970980 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-dvncc" podStartSLOduration=83.970946542 podStartE2EDuration="1m23.970946542s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:24.96593788 +0000 UTC m=+108.077473780" watchObservedRunningTime="2026-01-30 00:12:24.970946542 +0000 UTC m=+108.082482432"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.073728 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.076829 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.576774449 +0000 UTC m=+108.688310379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.176910 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.177856 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.677836371 +0000 UTC m=+108.789372261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.284696 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.285922 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.78588312 +0000 UTC m=+108.897419040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.355974 5117 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-g7kqs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.356054 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.357203 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-mq4qt"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.357249 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-mmnjm"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.359938 5117 patch_prober.go:28] interesting pod/console-operator-67c89758df-mmnjm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.359994 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-mq4qt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.360068 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mq4qt" podUID="e161fe62-f260-4253-a91c-00d71e12cd51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.360000 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" podUID="609de4f2-7b79-438a-b5c5-a2650396bc23" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.386999 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.387557 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.887528639 +0000 UTC m=+108.999064599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.413862 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6xn7s" podStartSLOduration=84.413836662 podStartE2EDuration="1m24.413836662s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.375804968 +0000 UTC m=+108.487340878" watchObservedRunningTime="2026-01-30 00:12:25.413836662 +0000 UTC m=+108.525372552"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.438137 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4bnp9" podStartSLOduration=84.438103427 podStartE2EDuration="1m24.438103427s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.414360417 +0000 UTC m=+108.525896317" watchObservedRunningTime="2026-01-30 00:12:25.438103427 +0000 UTC m=+108.549639327"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.488354 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.488597 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.988562401 +0000 UTC m=+109.100098291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.490284 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.490962 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.990918757 +0000 UTC m=+109.102454797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.497169 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" podStartSLOduration=84.497143803 podStartE2EDuration="1m24.497143803s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.477581581 +0000 UTC m=+108.589117471" watchObservedRunningTime="2026-01-30 00:12:25.497143803 +0000 UTC m=+108.608679693"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.497566 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-mq4qt" podStartSLOduration=84.497558675 podStartE2EDuration="1m24.497558675s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.490270599 +0000 UTC m=+108.601806499" watchObservedRunningTime="2026-01-30 00:12:25.497558675 +0000 UTC m=+108.609094565"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.499793 5117 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-cgnvb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.499866 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.530737 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lj2tn" podStartSLOduration=84.530713711 podStartE2EDuration="1m24.530713711s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.530241207 +0000 UTC m=+108.641777097" watchObservedRunningTime="2026-01-30 00:12:25.530713711 +0000 UTC m=+108.642249601"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.531035 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-zzzhq" podStartSLOduration=84.531031119 podStartE2EDuration="1m24.531031119s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.504014027 +0000 UTC m=+108.615549917" watchObservedRunningTime="2026-01-30 00:12:25.531031119 +0000 UTC m=+108.642567009"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.555185 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" podStartSLOduration=84.555162101 podStartE2EDuration="1m24.555162101s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.554371218 +0000 UTC m=+108.665907118" watchObservedRunningTime="2026-01-30 00:12:25.555162101 +0000 UTC m=+108.666697991"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.593256 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.593976 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.093934385 +0000 UTC m=+109.205470275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.613087 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2rpjg" event={"ID":"cffb688a-2382-4a7a-85e8-f93ecbb27a02","Type":"ContainerStarted","Data":"993b741795295eb32b1cfa5b2224d2ae2f1758d34d883161af8fddb4266df364"}
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.614534 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" event={"ID":"3c334d22-8d3f-4478-80b3-d3f4049c533f","Type":"ContainerStarted","Data":"beeab463f76a8f813d1601f912f856503df47ca6e8cdb6ad6f19c7dabbe9b6e5"}
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.615851 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" event={"ID":"e148c5fe-c209-4e41-82bb-aa78a79c0d66","Type":"ContainerStarted","Data":"0112c40430cf92af61d56b79ef49ad18f0b6e2927475accf9ea8aa4646bbeb87"}
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.616977 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" event={"ID":"be4ec378-78db-4de0-ae65-691720b18b85","Type":"ContainerStarted","Data":"34b0509d506d6de3e025b557a21a021365d73753ab982185c58125adcc04804f"}
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.618311 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" event={"ID":"a84db473-48ab-4f4b-a46c-a62c4db95393","Type":"ContainerStarted","Data":"2ceaa7b8f8652436c8e015523b0e06c535693100ec35c6de1ba5f54dfe856162"}
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.619093 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c28gc" event={"ID":"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe","Type":"ContainerStarted","Data":"49c7513c0d7cfa7674763fe778788e48de0c63a274a007125ec6efeebc398429"}
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.673007 5117 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-g7kqs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.673064 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.694661 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.695161 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.195128211 +0000 UTC m=+109.306664141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.796232 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.796415 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.296387639 +0000 UTC m=+109.407923529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.796608 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.797781 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.297759948 +0000 UTC m=+109.409295838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.899065 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.899246 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.399217011 +0000 UTC m=+109.510752911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.899492 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:25 crc kubenswrapper[5117]: E0130 00:12:25.899859 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.399849619 +0000 UTC m=+109.511385509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.910000 5117 patch_prober.go:28] interesting pod/console-operator-67c89758df-mmnjm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.910040 5117 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-cgnvb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.910074 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" podUID="609de4f2-7b79-438a-b5c5-a2650396bc23" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.910100 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-mq4qt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.910131 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.910192 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mq4qt" podUID="e161fe62-f260-4253-a91c-00d71e12cd51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.911213 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.911677 5117 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-pgbnh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body=
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.911726 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused"
Jan 30 00:12:25 crc kubenswrapper[5117]: I0130 00:12:25.931141 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-9c26x" podStartSLOduration=84.931112261 podStartE2EDuration="1m24.931112261s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.925993787 +0000 UTC m=+109.037529697" watchObservedRunningTime="2026-01-30 00:12:25.931112261 +0000 UTC m=+109.042648151"
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.001182 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.001357 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.501328633 +0000 UTC m=+109.612864533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.002583 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.002917 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.502900197 +0000 UTC m=+109.614436077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.105493 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.105894 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.605856743 +0000 UTC m=+109.717392653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.106003 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.106421 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.606410309 +0000 UTC m=+109.717946209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.207508 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.207780 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.707745369 +0000 UTC m=+109.819281259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.208306 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.208834 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.708825979 +0000 UTC m=+109.820361869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.309909 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.310168 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.810117898 +0000 UTC m=+109.921653798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.310970 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.311377 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.811365444 +0000 UTC m=+109.922901404 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.414662 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.414793 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.914768542 +0000 UTC m=+110.026304432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.415230 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.415980 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.915972026 +0000 UTC m=+110.027507916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.516101 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.516524 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.016503843 +0000 UTC m=+110.128039733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.618246 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.618750 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.118729908 +0000 UTC m=+110.230265798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.653438 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-ngpdz" event={"ID":"7370f172-a96c-42c9-971b-76b5ef52303e","Type":"ContainerStarted","Data":"3f9a11ef5868a7b98f073d251e52c71f53625139b249dc37c4cf10406791dacc"} Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.655308 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" event={"ID":"44b89803-3ace-4031-9267-19e85991373e","Type":"ContainerStarted","Data":"197c40ed2eb7bc4715659a6b367fa0fb37ed3b5ef448b7f8b2e021bc118793ca"} Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.659711 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" event={"ID":"11e6a64e-9963-4871-9f58-956f659aec4a","Type":"ContainerStarted","Data":"f31b2a2487bcc24299ff8b8cb9055f4b923d586f684eb69e98b4ff490d90721f"} Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.662486 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" event={"ID":"d3fceb33-fc7b-410d-bb5f-2332207d4d62","Type":"ContainerStarted","Data":"6cc123ed2d68c04b9a0cf0f74b6e887ab756b7bf64ca87754d3a379ebaa132c2"} Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.664064 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" event={"ID":"46da46fd-f439-48fe-88ef-5cfeb085e371","Type":"ContainerStarted","Data":"c74a198e5f6315b7bf4875d3999121b25b9901d948229f2a0d1d8db073f0bca7"} Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.665459 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" event={"ID":"88bd31dd-a6a3-4f38-8459-0d1be720d2ba","Type":"ContainerStarted","Data":"21435b71e36e1c89321c2eb5814c62e4a0cdadcec77f61644566aec7bf6c8955"} Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.669543 5117 patch_prober.go:28] interesting pod/console-operator-67c89758df-mmnjm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.669609 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" podUID="609de4f2-7b79-438a-b5c5-a2650396bc23" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.686935 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-lzcwf" 
podStartSLOduration=85.686912502 podStartE2EDuration="1m25.686912502s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:25.993284536 +0000 UTC m=+109.104820436" watchObservedRunningTime="2026-01-30 00:12:26.686912502 +0000 UTC m=+109.798448402" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.688179 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29495520-ngpdz" podStartSLOduration=85.688172578 podStartE2EDuration="1m25.688172578s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.685508082 +0000 UTC m=+109.797043992" watchObservedRunningTime="2026-01-30 00:12:26.688172578 +0000 UTC m=+109.799708458" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.720028 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.720264 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.220223032 +0000 UTC m=+110.331758922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.720860 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.721267 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.221249181 +0000 UTC m=+110.332785071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.764067 5117 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-pgbnh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.764128 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.764219 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.764293 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-mq4qt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.764328 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mq4qt" podUID="e161fe62-f260-4253-a91c-00d71e12cd51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.765231 5117 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-vxvgr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.765261 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" podUID="44b89803-3ace-4031-9267-19e85991373e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.780728 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podStartSLOduration=85.780683189 podStartE2EDuration="1m25.780683189s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.778290091 +0000 UTC m=+109.889825981" watchObservedRunningTime="2026-01-30 00:12:26.780683189 +0000 UTC m=+109.892219079" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.793987 5117 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" podStartSLOduration=85.793967813 podStartE2EDuration="1m25.793967813s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.792352928 +0000 UTC m=+109.903888818" watchObservedRunningTime="2026-01-30 00:12:26.793967813 +0000 UTC m=+109.905503703" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.811604 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4mqgt" podStartSLOduration=85.81156711 podStartE2EDuration="1m25.81156711s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.810327895 +0000 UTC m=+109.921863785" watchObservedRunningTime="2026-01-30 00:12:26.81156711 +0000 UTC m=+109.923103000" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.822522 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.823308 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.32326626 +0000 UTC m=+110.434802150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.830254 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-nk8lc" podStartSLOduration=85.830236857 podStartE2EDuration="1m25.830236857s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.828635512 +0000 UTC m=+109.940171402" watchObservedRunningTime="2026-01-30 00:12:26.830236857 +0000 UTC m=+109.941772747" Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.925049 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:26 crc kubenswrapper[5117]: E0130 00:12:26.926235 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.426218366 +0000 UTC m=+110.537754256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5117]: I0130 00:12:26.933767 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" podStartSLOduration=85.933748279 podStartE2EDuration="1m25.933748279s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:26.931379952 +0000 UTC m=+110.042915842" watchObservedRunningTime="2026-01-30 00:12:26.933748279 +0000 UTC m=+110.045284169" Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.028278 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.028896 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.528859543 +0000 UTC m=+110.640395433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.085574 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.088552 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.088643 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.129813 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.130211 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.630196993 +0000 UTC m=+110.741732883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.240216 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.240584 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.740559368 +0000 UTC m=+110.852095258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.341765 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.342228 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.842208867 +0000 UTC m=+110.953744757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.444352 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.445152 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.945123732 +0000 UTC m=+111.056659622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.546057 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.546437 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.046423421 +0000 UTC m=+111.157959311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.647831 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.648005 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.147970577 +0000 UTC m=+111.259506467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.648429 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.648781 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.148772129 +0000 UTC m=+111.260308019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.675253 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" event={"ID":"8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95","Type":"ContainerStarted","Data":"f15fa2accc0ef3f3e95df7469227cbb6b3d2b0045288bacf836662a6bcab26cb"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.677024 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" event={"ID":"c273d9d0-bf2b-4efa-a942-42c772dc7f20","Type":"ContainerStarted","Data":"4532542b11fd68e5e98655f7daa7f6b6377dccd0b99eec2a0cfbcc0d480e4766"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.681308 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" event={"ID":"92f91bd9-b566-4246-9ac7-9a591ec358b9","Type":"ContainerStarted","Data":"50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.687022 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" event={"ID":"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9","Type":"ContainerStarted","Data":"8e3d42e3b7f1a13f57235504813c9dad7f6195286dc4eb4491490ac6dc6f6352"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.689337 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" event={"ID":"b759bca6-26d8-4e5b-8401-00a6be292d4d","Type":"ContainerStarted","Data":"f1e66633365f66badc6b4a40fdb7de6e1a80a047da5f063514a8c8c1cc8b6769"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.690290 5117 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" event={"ID":"f040142e-c8d1-4bcc-87e7-f96ed272260f","Type":"ContainerStarted","Data":"4b159d9b4ef03eddf831fd2fc331eddebd26ab1dcd2325a7df9a4074b13bd7e4"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.691217 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9jt6p" event={"ID":"c5d7d859-970a-4c9e-ba49-e2fb1facde62","Type":"ContainerStarted","Data":"e2a4c97769881c8d3ef99d6b8e6345ab50b045cef1e919977d4a93f90a994a67"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.699579 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-8r5cz" event={"ID":"55051c39-0e72-4600-aa52-65bf35260f75","Type":"ContainerStarted","Data":"99204e78de8fa51138222f317dc29fa2ea8a3bf28c3cf7a076be3ae60a80a9f7"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.702965 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" event={"ID":"16dc898b-ab99-4df1-a84e-a3d57d7ccd84","Type":"ContainerStarted","Data":"d294584fea24cfec2349de4126a5743bd3243bc528b20842709da8f8dd7bc0f2"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.705529 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-85jpm" event={"ID":"484b7da9-d77c-4b22-a8b3-ae64ca9a1ff3","Type":"ContainerStarted","Data":"c7681186ef0139bce5c9bc8f111602ce48f49e6fe04e83457bfdb589d1ffc977"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.724749 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" event={"ID":"b565a4ec-53a5-4d82-bc5a-3f216a85bcfa","Type":"ContainerStarted","Data":"726c3e3ed965248522e8c0947bb61fac4accaa21e9e98420609eed00e93cb9ee"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.726507 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" event={"ID":"d11c8ac2-e86d-43b9-8985-ecfe6fb305ba","Type":"ContainerStarted","Data":"f35d729666812dbe773d5c85896b241a63adb9cf10ed107f317307004a09c578"} Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.749230 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.749350 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.249326107 +0000 UTC m=+111.360861997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.749642 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.750126 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.2501164 +0000 UTC m=+111.361652290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.851077 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.851187 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.351160542 +0000 UTC m=+111.462696432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.851384 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.851729 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.351719797 +0000 UTC m=+111.463255687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.948927 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.948942 5117 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-vxvgr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.949028 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" podUID="44b89803-3ace-4031-9267-19e85991373e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.950369 5117 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-f65lp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.950454 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.953029 5117 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.953188 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.453167121 +0000 UTC m=+111.564703011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5117]: I0130 00:12:27.954401 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:27 crc kubenswrapper[5117]: E0130 00:12:27.954647 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.454628922 +0000 UTC m=+111.566164812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.001140 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" podStartSLOduration=87.001120114 podStartE2EDuration="1m27.001120114s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:28.000737383 +0000 UTC m=+111.112273283" watchObservedRunningTime="2026-01-30 00:12:28.001120114 +0000 UTC m=+111.112656004" Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.013083 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v7kvx" podStartSLOduration=87.01303965 podStartE2EDuration="1m27.01303965s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:27.966693332 +0000 UTC m=+111.078229222" watchObservedRunningTime="2026-01-30 00:12:28.01303965 +0000 UTC m=+111.124575540" Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.039287 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9jt6p" podStartSLOduration=9.039265821 podStartE2EDuration="9.039265821s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:28.036836832 +0000 UTC m=+111.148372742" watchObservedRunningTime="2026-01-30 00:12:28.039265821 +0000 UTC m=+111.150801711" Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.040177 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" podStartSLOduration=87.040168686 podStartE2EDuration="1m27.040168686s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:28.019788791 +0000 UTC m=+111.131324681" watchObservedRunningTime="2026-01-30 00:12:28.040168686 +0000 UTC m=+111.151704576" Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.056674 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.056874 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:28.556838577 +0000 UTC m=+111.668374467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.056954 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" podStartSLOduration=87.056932829 podStartE2EDuration="1m27.056932829s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:28.056039584 +0000 UTC m=+111.167575464" watchObservedRunningTime="2026-01-30 00:12:28.056932829 +0000 UTC m=+111.168468719" Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.057304 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.057751 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.557731512 +0000 UTC m=+111.669267422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.087372 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.087457 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.168893 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.169192 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.669166747 +0000 UTC m=+111.780702637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.251876 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-8r5cz" podStartSLOduration=87.251854711 podStartE2EDuration="1m27.251854711s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:28.249231267 +0000 UTC m=+111.360767157" watchObservedRunningTime="2026-01-30 00:12:28.251854711 +0000 UTC m=+111.363390601"
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.265122 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-85jpm" podStartSLOduration=9.265105555 podStartE2EDuration="9.265105555s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:28.262930083 +0000 UTC m=+111.374466003" watchObservedRunningTime="2026-01-30 00:12:28.265105555 +0000 UTC m=+111.376641445"
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.269910 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.270824 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.770802756 +0000 UTC m=+111.882338646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.371001 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.371240 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.871202059 +0000 UTC m=+111.982737949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.371760 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.372207 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.872191867 +0000 UTC m=+111.983727757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.472893 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.473103 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.973056364 +0000 UTC m=+112.084592294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.473292 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.473883 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.973852637 +0000 UTC m=+112.085388527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.574712 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.574947 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.074909979 +0000 UTC m=+112.186445869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.575204 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.575662 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.07565308 +0000 UTC m=+112.187188970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.676881 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.677139 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.177094793 +0000 UTC m=+112.288630703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.677449 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.677854 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.177838674 +0000 UTC m=+112.289374554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.733931 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" event={"ID":"fbfdc6c4-be51-4e2c-8ed3-44424ccde813","Type":"ContainerStarted","Data":"b476e64760bcbfbbeb7e7720e303baa3173d556ad517a163451588941fcedc10"}
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.735811 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" event={"ID":"71716c09-9759-4c82-a34c-d20b59b0ed78","Type":"ContainerStarted","Data":"7c3df48386454c51f2abb90af70a180dbad25f78786af876bf0e5f8e8aee8028"}
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.737041 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q7tcw" event={"ID":"a09afae3-bd41-4f19-af49-34689367f229","Type":"ContainerStarted","Data":"2636bbfaa68775cca14b8c587f5c022f765d0637c98426cd7dc706bc2959dbe6"}
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.738083 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" event={"ID":"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091","Type":"ContainerStarted","Data":"1596f4fbc0764d5b340e2d3f58f9e96ed27af0aa3a7199c5d4f52a66c46fb4ae"}
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.739603 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" event={"ID":"70053511-152f-4649-a478-cbce9a4bd8e5","Type":"ContainerStarted","Data":"ebd8b5e09e71cc499bc3e5e00c030064a0eaba3ddc59001a9c80c03fea52a89f"}
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.778839 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.778999 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.278970088 +0000 UTC m=+112.390505988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.779150 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.779495 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.279486433 +0000 UTC m=+112.391022323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.880634 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.880858 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.380819383 +0000 UTC m=+112.492355273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.881344 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.881677 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.381668357 +0000 UTC m=+112.493204247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.930800 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 30 00:12:28 crc kubenswrapper[5117]: I0130 00:12:28.982486 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:28 crc kubenswrapper[5117]: E0130 00:12:28.982732 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.482683428 +0000 UTC m=+112.594219318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.086665 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.087334 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.587310871 +0000 UTC m=+112.698846761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.099337 5117 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-vxvgr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.100200 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" podUID="44b89803-3ace-4031-9267-19e85991373e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.101129 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:29 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld
Jan 30 00:12:29 crc kubenswrapper[5117]: [+]process-running ok
Jan 30 00:12:29 crc kubenswrapper[5117]: healthz check failed
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.101357 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.108471 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.121921 5117 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-f65lp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.121989 5117 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-8x2r4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.122000 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.121929 5117 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-fw9pw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.122058 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" podUID="b565a4ec-53a5-4d82-bc5a-3f216a85bcfa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.122090 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" podUID="f040142e-c8d1-4bcc-87e7-f96ed272260f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.123134 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.123421 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.129640 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.129688 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.129737 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.189684 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.189829 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.689795253 +0000 UTC m=+112.801331143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.190149 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b10eadce-2a63-452a-b132-2d0258dca591-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.190194 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.190530 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10eadce-2a63-452a-b132-2d0258dca591-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.192214 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.692205081 +0000 UTC m=+112.803740971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.283018 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-xw2m5" podStartSLOduration=88.282997444 podStartE2EDuration="1m28.282997444s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.28144161 +0000 UTC m=+112.392977500" watchObservedRunningTime="2026-01-30 00:12:29.282997444 +0000 UTC m=+112.394533334"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.292345 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.292471 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10eadce-2a63-452a-b132-2d0258dca591-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.292521 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b10eadce-2a63-452a-b132-2d0258dca591-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.292676 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b10eadce-2a63-452a-b132-2d0258dca591-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.292771 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.792752649 +0000 UTC m=+112.904288539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.308125 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q7tcw" podStartSLOduration=88.308110133 podStartE2EDuration="1m28.308110133s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.30589512 +0000 UTC m=+112.417431010" watchObservedRunningTime="2026-01-30 00:12:29.308110133 +0000 UTC m=+112.419646023"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.329201 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7zgrx" podStartSLOduration=88.329179287 podStartE2EDuration="1m28.329179287s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.327176031 +0000 UTC m=+112.438711931" watchObservedRunningTime="2026-01-30 00:12:29.329179287 +0000 UTC m=+112.440715177"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.337528 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10eadce-2a63-452a-b132-2d0258dca591-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.366007 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" podStartSLOduration=88.365986926 podStartE2EDuration="1m28.365986926s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.363107955 +0000 UTC m=+112.474643845" watchObservedRunningTime="2026-01-30 00:12:29.365986926 +0000 UTC m=+112.477522816"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.396618 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.397070 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.897046803 +0000 UTC m=+113.008582683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.445215 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.500424 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.500703 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.000667837 +0000 UTC m=+113.112203727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.601493 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.602040 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.102014798 +0000 UTC m=+113.213550688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.702642 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.703095 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.203050539 +0000 UTC m=+113.314586429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.703776 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.704163 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.204153891 +0000 UTC m=+113.315689781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.764274 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" event={"ID":"c273d9d0-bf2b-4efa-a942-42c772dc7f20","Type":"ContainerStarted","Data":"b6e28daaf6a15b60e0d2fdc51ac0df883e163fab108b1b94de31263d848d24ee"}
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.766529 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" event={"ID":"49962195-77dc-47ef-a7dc-e9c1631d049d","Type":"ContainerStarted","Data":"19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769"}
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.766556 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.767563 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" event={"ID":"88bd31dd-a6a3-4f38-8459-0d1be720d2ba","Type":"ContainerStarted","Data":"413f991e227486b09ca799b92dd4be3811f420d1309e3d7a86ad6c541d7aaaea"}
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.774141 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" event={"ID":"bc268a8d-137f-49eb-bb96-b696fdf66ccc","Type":"ContainerStarted","Data":"619eb8998578ea0bd6a40d7ca630592b5733cfec75163b6fa6eea9b7448ed888"}
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.774170 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.777122 5117 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-f65lp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.777167 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.778013 5117 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-fw9pw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.778078 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" podUID="f040142e-c8d1-4bcc-87e7-f96ed272260f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.778227 5117 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-8x2r4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.778259 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" podUID="b565a4ec-53a5-4d82-bc5a-3f216a85bcfa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.816086 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.820599 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.320566846 +0000 UTC m=+113.432102736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.855711 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" podStartSLOduration=88.855669647 podStartE2EDuration="1m28.855669647s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.84869957 +0000 UTC m=+112.960235470" watchObservedRunningTime="2026-01-30 00:12:29.855669647 +0000 UTC m=+112.967205537"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.856000 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" podStartSLOduration=10.855991356 podStartE2EDuration="10.855991356s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.798150744 +0000 UTC m=+112.909686634" watchObservedRunningTime="2026-01-30 00:12:29.855991356 +0000 UTC m=+112.967527246"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.880330 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-wtlqb" podStartSLOduration=88.880312142 podStartE2EDuration="1m28.880312142s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.874066546 +0000 UTC m=+112.985602436" watchObservedRunningTime="2026-01-30 00:12:29.880312142 +0000 UTC m=+112.991848032"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.925791 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" podStartSLOduration=88.925763625 podStartE2EDuration="1m28.925763625s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:29.903145177 +0000 UTC m=+113.014681077" watchObservedRunningTime="2026-01-30 00:12:29.925763625 +0000 UTC m=+113.037299515"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.928393 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:29 crc kubenswrapper[5117]: E0130 00:12:29.930439 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.430418727 +0000 UTC m=+113.541954617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.956139 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.979426 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs"
Jan 30 00:12:29 crc kubenswrapper[5117]: I0130 00:12:29.979497 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs"
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.004477 5117 patch_prober.go:28] interesting pod/apiserver-8596bd845d-s2hrs container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.25:8443/livez\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.004932 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" podUID="8d7e2e6e-3f6c-4c5a-9db5-fe994b14ba95" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.25:8443/livez\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.029521 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.031472 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.531439398 +0000 UTC m=+113.642975298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.089011 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:30 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld
Jan 30 00:12:30 crc kubenswrapper[5117]: [+]process-running ok
Jan 30 00:12:30 crc kubenswrapper[5117]: healthz check failed
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.089092 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.131930 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.132280 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.632267444 +0000 UTC m=+113.743803334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.232898 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.233259 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.733218082 +0000 UTC m=+113.844753982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.233441 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.233936 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.733917322 +0000 UTC m=+113.845453222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.335155 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.335293 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.835264192 +0000 UTC m=+113.946800082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.335580 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.335992 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.835981942 +0000 UTC m=+113.947517832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.436495 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.436651 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.936615203 +0000 UTC m=+114.048151093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.437336 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.437866 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.937846487 +0000 UTC m=+114.049382387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.538441 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.538854 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.038831758 +0000 UTC m=+114.150367638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.640719 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.641175 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.141154486 +0000 UTC m=+114.252690366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.741460 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.741852 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.241830547 +0000 UTC m=+114.353366437 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.759178 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.759234 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-dvncc" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.761074 5117 patch_prober.go:28] interesting pod/console-64d44f6ddf-dvncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.761152 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-dvncc" podUID="3c09a221-05c5-4aa7-a59f-7501885dd323" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.778536 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-mq4qt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.778597 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mq4qt" podUID="e161fe62-f260-4253-a91c-00d71e12cd51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.780460 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" event={"ID":"b759bca6-26d8-4e5b-8401-00a6be292d4d","Type":"ContainerStarted","Data":"d5caa3c9614fb797a674d4354d1cd9d16622caab55b5e12cbd5200486918c0df"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.780615 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.781806 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" event={"ID":"a84db473-48ab-4f4b-a46c-a62c4db95393","Type":"ContainerStarted","Data":"eb66136f2ff138d64ae31656a66bc9217e2e69133375dc0e845d66415a8f09ba"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.783047 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" event={"ID":"16dc898b-ab99-4df1-a84e-a3d57d7ccd84","Type":"ContainerStarted","Data":"d174594a144328c2ce1a2f63a0cea779889d0040218b70d573d458008ba98416"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.784455 5117 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" event={"ID":"abfbd0d0-cfec-4caf-aa18-2d2fb1beb091","Type":"ContainerStarted","Data":"5de5bc5bdfe6c2fae4d36bbd373764cc4cf067646a437b3c0866cc696efa8dbf"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.785499 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b10eadce-2a63-452a-b132-2d0258dca591","Type":"ContainerStarted","Data":"dcf9c112d215ff64b26e16d6a27c59699666042107a312423b5e3f5dbb0b2a21"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.785522 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b10eadce-2a63-452a-b132-2d0258dca591","Type":"ContainerStarted","Data":"fc9f20e4a595b8b99b2cccd39e47fd084a1881b29fd27e7ac61400a053161400"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.787297 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" event={"ID":"ffa6bede-24d1-4bc2-8b82-b7ebc48028b9","Type":"ContainerStarted","Data":"2789e0c5f28fb0424a92523d361ecb1d26863a00aa4ad4e88a567c342ae9ce8f"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.788916 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2rpjg" event={"ID":"cffb688a-2382-4a7a-85e8-f93ecbb27a02","Type":"ContainerStarted","Data":"8639c25304cd9e731656741fe49bee577609b6fa128b44591ef52bec17c2cdd0"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.788956 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2rpjg" event={"ID":"cffb688a-2382-4a7a-85e8-f93ecbb27a02","Type":"ContainerStarted","Data":"660f523587b2def92b143de651e8b328427f49eb4e631c3b8ffadba1fc8b760a"} Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.816164 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.826116 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6gn48" podStartSLOduration=89.826101915 podStartE2EDuration="1m29.826101915s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.82485516 +0000 UTC m=+113.936391080" watchObservedRunningTime="2026-01-30 00:12:30.826101915 +0000 UTC m=+113.937637805" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.827573 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89" podStartSLOduration=89.827566617 podStartE2EDuration="1m29.827566617s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.804409523 +0000 UTC m=+113.915945423" watchObservedRunningTime="2026-01-30 00:12:30.827566617 +0000 UTC m=+113.939102507" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.843823 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.845596 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.345578795 +0000 UTC m=+114.457114765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.857009 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-mphjp" podStartSLOduration=89.856990077 podStartE2EDuration="1m29.856990077s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.855713251 +0000 UTC m=+113.967249161" watchObservedRunningTime="2026-01-30 00:12:30.856990077 +0000 UTC m=+113.968525967" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.877109 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-l64ns" podStartSLOduration=89.877089685 podStartE2EDuration="1m29.877089685s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.875590632 +0000 UTC m=+113.987126532" watchObservedRunningTime="2026-01-30 00:12:30.877089685 +0000 UTC m=+113.988625585" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.899199 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mb68g" podStartSLOduration=89.899178168 podStartE2EDuration="1m29.899178168s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.897583303 +0000 UTC m=+114.009119193" watchObservedRunningTime="2026-01-30 00:12:30.899178168 +0000 UTC m=+114.010714058" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.926883 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-nlhql" podStartSLOduration=89.92686668 podStartE2EDuration="1m29.92686668s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.924552454 +0000 UTC m=+114.036088354" watchObservedRunningTime="2026-01-30 00:12:30.92686668 +0000 UTC m=+114.038402570" Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.945212 5117 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.945534 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.445478015 +0000 UTC m=+114.557013905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.945716 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:30 crc kubenswrapper[5117]: E0130 00:12:30.946102 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.446086462 +0000 UTC m=+114.557622352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5117]: I0130 00:12:30.971241 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2rpjg" podStartSLOduration=11.971225722 podStartE2EDuration="11.971225722s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:30.969195174 +0000 UTC m=+114.080731064" watchObservedRunningTime="2026-01-30 00:12:30.971225722 +0000 UTC m=+114.082761612" Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.046473 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.046673 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.546617669 +0000 UTC m=+114.658153579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.047078 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.047448 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.547432132 +0000 UTC m=+114.658968022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.088192 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:31 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:31 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:31 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.088292 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.148856 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.149075 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.64904205 +0000 UTC m=+114.760577940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.149540 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.149899 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.649891054 +0000 UTC m=+114.761426944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.172086 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dtfxb"] Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.251263 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.251419 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.751389399 +0000 UTC m=+114.862925289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.251625 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.251920 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.751913044 +0000 UTC m=+114.863448924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.353056 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.353290 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.853250504 +0000 UTC m=+114.964786394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.353874 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.354302 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.854282283 +0000 UTC m=+114.965818173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.454763 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.455060 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.955041017 +0000 UTC m=+115.066576907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.556773 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.557309 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.057272862 +0000 UTC m=+115.168808752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.658136 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.658383 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.158346475 +0000 UTC m=+115.269882365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.759475 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.759983 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.259958953 +0000 UTC m=+115.371494833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.795293 5117 generic.go:358] "Generic (PLEG): container finished" podID="b10eadce-2a63-452a-b132-2d0258dca591" containerID="dcf9c112d215ff64b26e16d6a27c59699666042107a312423b5e3f5dbb0b2a21" exitCode=0 Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.795380 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b10eadce-2a63-452a-b132-2d0258dca591","Type":"ContainerDied","Data":"dcf9c112d215ff64b26e16d6a27c59699666042107a312423b5e3f5dbb0b2a21"} Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.797616 5117 generic.go:358] "Generic (PLEG): container finished" podID="11e6a64e-9963-4871-9f58-956f659aec4a" containerID="f31b2a2487bcc24299ff8b8cb9055f4b923d586f684eb69e98b4ff490d90721f" exitCode=0 Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.797664 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" event={"ID":"11e6a64e-9963-4871-9f58-956f659aec4a","Type":"ContainerDied","Data":"f31b2a2487bcc24299ff8b8cb9055f4b923d586f684eb69e98b4ff490d90721f"} Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.799359 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-2rpjg" Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.860880 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.867980 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.36793874 +0000 UTC m=+115.479474630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.869734 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.870955 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.370938935 +0000 UTC m=+115.482474825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.971116 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.971363 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.471321248 +0000 UTC m=+115.582857138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5117]: I0130 00:12:31.971537 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:31 crc kubenswrapper[5117]: E0130 00:12:31.972031 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.472009308 +0000 UTC m=+115.583545198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.037791 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.072908 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.073130 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.573094921 +0000 UTC m=+115.684630811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.073412 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.073780 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.57377289 +0000 UTC m=+115.685308780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.085547 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.089701 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:32 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:32 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:32 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.089778 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.177021 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.177280 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.67723365 +0000 UTC m=+115.788769540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.177456 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.178585 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.678570078 +0000 UTC m=+115.790105968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.279398 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.280024 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.78000032 +0000 UTC m=+115.891536210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.302945 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38048: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.377193 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-26tjl"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.381555 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38062: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.381934 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.382403 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.88238291 +0000 UTC m=+115.993918800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.481715 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38066: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.483374 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.483580 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.983536345 +0000 UTC m=+116.095072245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.483739 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.484080 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.98406555 +0000 UTC m=+116.095601430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.506842 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38080: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.540986 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38088: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.567489 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26tjl"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.567539 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nfcw7"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.567723 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.570851 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.580010 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.585390 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.585582 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.085532404 +0000 UTC m=+116.197068294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.585832 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.586200 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.086185722 +0000 UTC m=+116.197721612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.594936 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfcw7"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.596843 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.618417 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38098: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.687127 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.687372 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.187327007 +0000 UTC m=+116.298862927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.687654 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-utilities\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.687808 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-utilities\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.687847 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ptf\" (UniqueName: \"kubernetes.io/projected/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-kube-api-access-97ptf\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.687934 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45k5n\" (UniqueName: \"kubernetes.io/projected/fe73bcd6-db8f-4472-a65f-b7858304bc8b-kube-api-access-45k5n\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.688010 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.688035 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-catalog-content\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.688115 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-catalog-content\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.688418 5117 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.188395687 +0000 UTC m=+116.299931577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.785680 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hpvcc"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.789444 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.789643 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.289610404 +0000 UTC m=+116.401146294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.789917 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-catalog-content\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.789994 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-utilities\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790104 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-utilities\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790165 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-97ptf\" (UniqueName: \"kubernetes.io/projected/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-kube-api-access-97ptf\") 
pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790557 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-catalog-content\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790475 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-utilities\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790605 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-utilities\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790659 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45k5n\" (UniqueName: \"kubernetes.io/projected/fe73bcd6-db8f-4472-a65f-b7858304bc8b-kube-api-access-45k5n\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790759 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.790795 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-catalog-content\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.791124 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-catalog-content\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.791393 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.291384114 +0000 UTC m=+116.402920004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.805314 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.821656 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpvcc"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.821711 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c28gc" event={"ID":"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe","Type":"ContainerStarted","Data":"c121b873137df070c15b86560076537d9067ee6f05f69ab351f353c4a5e508a4"} Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.821738 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c"} Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.823055 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" gracePeriod=30 Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.824284 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.824779 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.830256 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45k5n\" (UniqueName: \"kubernetes.io/projected/fe73bcd6-db8f-4472-a65f-b7858304bc8b-kube-api-access-45k5n\") pod \"certified-operators-26tjl\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.830257 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-97ptf\" (UniqueName: \"kubernetes.io/projected/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-kube-api-access-97ptf\") pod \"community-operators-nfcw7\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.856226 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38114: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.862571 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=29.862549472 podStartE2EDuration="29.862549472s" podCreationTimestamp="2026-01-30 00:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:32.861301507 +0000 UTC m=+115.972837417" watchObservedRunningTime="2026-01-30 00:12:32.862549472 +0000 UTC m=+115.974085362" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.883079 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.892925 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.893080 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-catalog-content\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.893131 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-utilities\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.893163 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhst2\" (UniqueName: \"kubernetes.io/projected/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-kube-api-access-nhst2\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.893815 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.393795254 +0000 UTC m=+116.505331144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.894271 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.917853 5117 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-nkcjt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.918263 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" podUID="bc268a8d-137f-49eb-bb96-b696fdf66ccc" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.969658 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b9hmp"] Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.994164 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-utilities\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.994214 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.994241 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhst2\" (UniqueName: \"kubernetes.io/projected/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-kube-api-access-nhst2\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.994309 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-catalog-content\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.994724 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-catalog-content\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: I0130 00:12:32.994949 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-utilities\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:32 crc kubenswrapper[5117]: E0130 00:12:32.995957 5117 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.495942957 +0000 UTC m=+116.607478847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.035530 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhst2\" (UniqueName: \"kubernetes.io/projected/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-kube-api-access-nhst2\") pod \"certified-operators-hpvcc\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.045010 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.101267 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9hmp"] Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.103746 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:33 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:33 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:33 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.103843 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.106769 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.107208 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4j5p\" (UniqueName: \"kubernetes.io/projected/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-kube-api-access-n4j5p\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.107252 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-catalog-content\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 
00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.107322 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-utilities\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.107502 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.607474455 +0000 UTC m=+116.719010345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.200911 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.209089 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-utilities\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.209304 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.209434 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4j5p\" (UniqueName: \"kubernetes.io/projected/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-kube-api-access-n4j5p\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.209548 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-catalog-content\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.212482 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.712451208 +0000 UTC m=+116.823987098 (durationBeforeRetry 500ms). 
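
The two recurring sandbox messages map to two distinct conditions: util.go:30 fires when a pod has no sandbox at all (the freshly created marketplace catalog pods), while util.go:48 fires when a sandbox exists but is no longer ready (the revision-pruner and collect-profiles pods above). Either way the outcome is the same: start a new sandbox before any containers. A simplified decision table, loosely modeled on kubelet's sandbox-change logic with stand-in types:

    package main

    import "fmt"

    // Illustrative decision table for the two util.go messages in this
    // log; the real logic lives in kubelet's kuberuntime package.
    type sandbox struct {
        exists bool
        ready  bool
    }

    func needNewSandbox(s sandbox) (bool, string) {
        switch {
        case !s.exists:
            return true, "No sandbox for pod can be found. Need to start a new one"
        case !s.ready:
            return true, "No ready sandbox for pod can be found. Need to start a new one"
        default:
            return false, ""
        }
    }

    func main() {
        for _, s := range []sandbox{{false, false}, {true, false}, {true, true}} {
            need, msg := needNewSandbox(s)
            fmt.Println(need, msg)
        }
    }
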
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.222276 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-catalog-content\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.231077 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-utilities\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.250987 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38116: no serving certificate available for the kubelet" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.258135 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4j5p\" (UniqueName: \"kubernetes.io/projected/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-kube-api-access-n4j5p\") pod \"community-operators-b9hmp\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.313456 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.314010 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.813990084 +0000 UTC m=+116.925525974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.357913 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.358458 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.417144 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlpsp\" (UniqueName: \"kubernetes.io/projected/11e6a64e-9963-4871-9f58-956f659aec4a-kube-api-access-rlpsp\") pod \"11e6a64e-9963-4871-9f58-956f659aec4a\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.417600 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10eadce-2a63-452a-b132-2d0258dca591-kube-api-access\") pod \"b10eadce-2a63-452a-b132-2d0258dca591\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.417635 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11e6a64e-9963-4871-9f58-956f659aec4a-secret-volume\") pod \"11e6a64e-9963-4871-9f58-956f659aec4a\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.417670 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e6a64e-9963-4871-9f58-956f659aec4a-config-volume\") pod \"11e6a64e-9963-4871-9f58-956f659aec4a\" (UID: \"11e6a64e-9963-4871-9f58-956f659aec4a\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.417871 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b10eadce-2a63-452a-b132-2d0258dca591-kubelet-dir\") pod \"b10eadce-2a63-452a-b132-2d0258dca591\" (UID: \"b10eadce-2a63-452a-b132-2d0258dca591\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.418033 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.418369 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:33.918355179 +0000 UTC m=+117.029891069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.419849 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e6a64e-9963-4871-9f58-956f659aec4a-config-volume" (OuterVolumeSpecName: "config-volume") pod "11e6a64e-9963-4871-9f58-956f659aec4a" (UID: "11e6a64e-9963-4871-9f58-956f659aec4a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.420480 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b10eadce-2a63-452a-b132-2d0258dca591-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b10eadce-2a63-452a-b132-2d0258dca591" (UID: "b10eadce-2a63-452a-b132-2d0258dca591"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.425977 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.441869 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10eadce-2a63-452a-b132-2d0258dca591-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b10eadce-2a63-452a-b132-2d0258dca591" (UID: "b10eadce-2a63-452a-b132-2d0258dca591"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.443331 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e6a64e-9963-4871-9f58-956f659aec4a-kube-api-access-rlpsp" (OuterVolumeSpecName: "kube-api-access-rlpsp") pod "11e6a64e-9963-4871-9f58-956f659aec4a" (UID: "11e6a64e-9963-4871-9f58-956f659aec4a"). InnerVolumeSpecName "kube-api-access-rlpsp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.450881 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e6a64e-9963-4871-9f58-956f659aec4a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11e6a64e-9963-4871-9f58-956f659aec4a" (UID: "11e6a64e-9963-4871-9f58-956f659aec4a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.518911 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.519456 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10eadce-2a63-452a-b132-2d0258dca591-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.519533 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11e6a64e-9963-4871-9f58-956f659aec4a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.519590 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e6a64e-9963-4871-9f58-956f659aec4a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.519664 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b10eadce-2a63-452a-b132-2d0258dca591-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.519753 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rlpsp\" (UniqueName: \"kubernetes.io/projected/11e6a64e-9963-4871-9f58-956f659aec4a-kube-api-access-rlpsp\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.519884 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.019863784 +0000 UTC m=+117.131399674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.564799 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26tjl"] Jan 30 00:12:33 crc kubenswrapper[5117]: W0130 00:12:33.602399 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe73bcd6_db8f_4472_a65f_b7858304bc8b.slice/crio-178a3c82d89cf257a117a46fb20a2c5929de68bd7ccbeb7ca50804c5d1c81fa5 WatchSource:0}: Error finding container 178a3c82d89cf257a117a46fb20a2c5929de68bd7ccbeb7ca50804c5d1c81fa5: Status 404 returned error can't find the container with id 178a3c82d89cf257a117a46fb20a2c5929de68bd7ccbeb7ca50804c5d1c81fa5 Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.605615 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfcw7"] Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.622389 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.622748 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.122733918 +0000 UTC m=+117.234269808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: W0130 00:12:33.634833 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48b76cf6_e8bb_4fb2_92bd_4b1718a794f6.slice/crio-62855ab264c18230ad0baa7bb62ae64cc53c48103b381d3d2afcb8a2dd3efc06 WatchSource:0}: Error finding container 62855ab264c18230ad0baa7bb62ae64cc53c48103b381d3d2afcb8a2dd3efc06: Status 404 returned error can't find the container with id 62855ab264c18230ad0baa7bb62ae64cc53c48103b381d3d2afcb8a2dd3efc06 Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.723427 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.723688 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.223664895 +0000 UTC m=+117.335200775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.780717 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9hmp"] Jan 30 00:12:33 crc kubenswrapper[5117]: W0130 00:12:33.794968 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-2a38f28bc991fb110b4f8df51cdb83df91c9aab40bc1c50e5474ce6fa2cee8ff WatchSource:0}: Error finding container 2a38f28bc991fb110b4f8df51cdb83df91c9aab40bc1c50e5474ce6fa2cee8ff: Status 404 returned error can't find the container with id 2a38f28bc991fb110b4f8df51cdb83df91c9aab40bc1c50e5474ce6fa2cee8ff Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.823904 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" event={"ID":"11e6a64e-9963-4871-9f58-956f659aec4a","Type":"ContainerDied","Data":"a01fffac5f3abd4d159908b27bb52273d95287dad1bde8b649dc888f00d35da0"} Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.823945 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01fffac5f3abd4d159908b27bb52273d95287dad1bde8b649dc888f00d35da0" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.824023 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-9nb7k" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.824049 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpvcc"] Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.824723 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.825080 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.325066117 +0000 UTC m=+117.436602007 (durationBeforeRetry 500ms). 
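
The manager.go:1169 warnings are a benign startup race: a cgroup watch event fires for a just-created container (here the community-operators-b9hmp sandbox 2a38f28b…, which PLEG reports as ContainerStarted moments later), but the runtime cannot yet answer for that ID and returns 404, so the event is logged and dropped until a later pass. A sketch of tolerating that race, illustrative only:

    package main

    import (
        "errors"
        "fmt"
    )

    // Sketch of tolerating the 404 seen in the manager.go warnings: a
    // watch event can fire before the runtime can answer for that
    // container ID, so not-found is treated as transient, not fatal.
    var errNotFound = errors.New("status 404: can't find the container")

    func inspect(id string, known map[string]bool) error {
        if !known[id] {
            return errNotFound
        }
        return nil
    }

    func handleWatchEvent(id string, known map[string]bool) {
        if err := inspect(id, known); errors.Is(err, errNotFound) {
            // Log and continue; the container is picked up on a later
            // housekeeping pass once the runtime can see it.
            fmt.Printf("Failed to process watch event for %s: %v\n", id, err)
            return
        }
        fmt.Println("watch event processed for", id)
    }

    func main() {
        known := map[string]bool{}
        handleWatchEvent("2a38f28bc991", known) // races: logged, not fatal
        known["2a38f28bc991"] = true
        handleWatchEvent("2a38f28bc991", known) // later pass succeeds
    }
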
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.828762 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b10eadce-2a63-452a-b132-2d0258dca591","Type":"ContainerDied","Data":"fc9f20e4a595b8b99b2cccd39e47fd084a1881b29fd27e7ac61400a053161400"} Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.828794 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9f20e4a595b8b99b2cccd39e47fd084a1881b29fd27e7ac61400a053161400" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.828810 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.833081 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9hmp" event={"ID":"d8e9f7c6-ffd2-40f7-82fa-9fab50710838","Type":"ContainerStarted","Data":"2a38f28bc991fb110b4f8df51cdb83df91c9aab40bc1c50e5474ce6fa2cee8ff"} Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.838248 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerStarted","Data":"178a3c82d89cf257a117a46fb20a2c5929de68bd7ccbeb7ca50804c5d1c81fa5"} Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.839287 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerStarted","Data":"62855ab264c18230ad0baa7bb62ae64cc53c48103b381d3d2afcb8a2dd3efc06"} Jan 30 00:12:33 crc kubenswrapper[5117]: W0130 00:12:33.847618 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-2b44cc134c1be33631a8d3969d85f043d85f6636f80f987c832026840f7862ce WatchSource:0}: Error finding container 2b44cc134c1be33631a8d3969d85f043d85f6636f80f987c832026840f7862ce: Status 404 returned error can't find the container with id 2b44cc134c1be33631a8d3969d85f043d85f6636f80f987c832026840f7862ce Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.886628 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.887638 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b10eadce-2a63-452a-b132-2d0258dca591" containerName="pruner" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.887656 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10eadce-2a63-452a-b132-2d0258dca591" containerName="pruner" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.887683 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11e6a64e-9963-4871-9f58-956f659aec4a" containerName="collect-profiles" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.887726 5117 
state_mem.go:107] "Deleted CPUSet assignment" podUID="11e6a64e-9963-4871-9f58-956f659aec4a" containerName="collect-profiles" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.887815 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="11e6a64e-9963-4871-9f58-956f659aec4a" containerName="collect-profiles" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.887828 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="b10eadce-2a63-452a-b132-2d0258dca591" containerName="pruner" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.923864 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.924040 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.925400 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.926032 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.426006956 +0000 UTC m=+117.537542846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.928869 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:33 crc kubenswrapper[5117]: E0130 00:12:33.929234 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.429218577 +0000 UTC m=+117.540754467 (durationBeforeRetry 500ms). 
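
The RemoveStaleState and "Deleted CPUSet assignment" records show the CPU and memory managers pruning in-memory assignments for pods that no longer exist (the finished pruner and collect-profiles containers) as the new revision-pruner-11-crc pod is admitted, so stale reservations don't leak into new placements. A sketch of that pattern with illustrative types:

    package main

    import "fmt"

    // Sketch of the RemoveStaleState pattern from cpu_manager and
    // memory_manager: on admission, assignments belonging to pods that
    // no longer exist are dropped. Types and names are illustrative.
    type assignments map[string]map[string]string // podUID -> container -> cpuset

    func removeStaleState(st assignments, activePods map[string]bool) {
        for podUID, containers := range st {
            if activePods[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("removing container podUID=%q containerName=%q\n", podUID, name)
            }
            delete(st, podUID) // the "Deleted CPUSet assignment" step
        }
    }

    func main() {
        st := assignments{
            "b10eadce": {"pruner": "0-3"},
            "11e6a64e": {"collect-profiles": "0-3"},
        }
        removeStaleState(st, map[string]bool{}) // neither pod is active any more
        fmt.Println(len(st))                    // 0
    }
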
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.933526 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.933603 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:33 crc kubenswrapper[5117]: I0130 00:12:33.942351 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38130: no serving certificate available for the kubelet" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.032309 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.032506 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.532470991 +0000 UTC m=+117.644006871 (durationBeforeRetry 500ms). 
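
The periodic "TLS handshake error … no serving certificate available for the kubelet" lines mean the kubelet's serving certificate has not been issued yet (its serving CSR is still pending approval), so its HTTPS server rejects every incoming handshake until a certificate arrives. A sketch with the same observable behavior, assuming a dynamically supplied certificate rather than kubelet's actual certificate manager:

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "sync/atomic"
    )

    // Illustration only: a TLS server whose certificate is provided
    // dynamically fails handshakes until a certificate exists, which is
    // the failure mode visible in the log.
    var current atomic.Pointer[tls.Certificate]

    func getCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
        if c := current.Load(); c != nil {
            return c, nil
        }
        return nil, errors.New("no serving certificate available for the kubelet")
    }

    func main() {
        cfg := &tls.Config{GetCertificate: getCertificate}
        _ = cfg // plug into http.Server.TLSConfig or tls.Listen
        // Until current is populated (e.g. after the node's serving CSR
        // is approved), every handshake fails with the error in the log.
        _, err := getCertificate(nil)
        fmt.Println(err)
    }
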
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.032646 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.032743 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.032795 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.033214 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.533206662 +0000 UTC m=+117.644742552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.099736 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:34 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:34 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:34 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.099795 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.134510 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.134648 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.134723 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.134865 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.134946 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.634927593 +0000 UTC m=+117.746463483 (durationBeforeRetry 500ms). 
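
The router's startup probe and the earlier openshift-config-operator liveness probe fail in the two classic ways: an HTTP 500 whose body start is logged (the [-]backend-http / [-]has-synced healthz output), and a connection refused before any HTTP exchange happens. A probe is, in essence, a bounded GET where any status outside 200-399 or any transport error counts as failure; a minimal sketch in that spirit (kubelet's real prober also skips TLS verification, which this sketch does not):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Minimal HTTP probe sketch: GET with a short timeout; statuses in
    // 200-399 succeed, anything else (or a dial error such as
    // "connection refused") is a failure whose body start gets logged.
    func probe(url string) (string, error) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return "failure", err // e.g. dial tcp ...: connect: connection refused
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success", nil
        }
        return "failure", fmt.Errorf("HTTP probe failed with statuscode: %d: %s", resp.StatusCode, body)
    }

    func main() {
        // healthz endpoint taken from the log records above
        result, err := probe("https://10.217.0.15:8443/healthz")
        fmt.Println(result, err)
    }
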
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.174096 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.236039 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.236518 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.736478489 +0000 UTC m=+117.848014379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.252700 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.337383 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.337817 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:34.837796589 +0000 UTC m=+117.949332479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.368063 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x2hcj"] Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.372237 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.374656 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.382781 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2hcj"] Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.439137 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.439204 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.439294 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-utilities\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.439364 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.439384 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2xrg\" (UniqueName: \"kubernetes.io/projected/96d26479-7c9f-4877-afc4-338863fcdf4d-kube-api-access-h2xrg\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.439493 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-catalog-content\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.439735 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:34.939717165 +0000 UTC m=+118.051253055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.454194 5117 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-scnb9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]log ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]etcd ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/max-in-flight-filter ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 00:12:34 crc kubenswrapper[5117]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-startinformers ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 00:12:34 crc kubenswrapper[5117]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:12:34 crc kubenswrapper[5117]: livez check failed Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.454270 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" podUID="fbfdc6c4-be51-4e2c-8ed3-44424ccde813" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.541411 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.541571 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2xrg\" (UniqueName: \"kubernetes.io/projected/96d26479-7c9f-4877-afc4-338863fcdf4d-kube-api-access-h2xrg\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.541664 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-catalog-content\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.541796 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-utilities\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.541935 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.041899929 +0000 UTC m=+118.153435809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.543404 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-catalog-content\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.543653 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-utilities\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.567102 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2xrg\" (UniqueName: \"kubernetes.io/projected/96d26479-7c9f-4877-afc4-338863fcdf4d-kube-api-access-h2xrg\") pod \"redhat-marketplace-x2hcj\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.578440 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.643468 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.643843 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:35.143825266 +0000 UTC m=+118.255361156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.745122 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.745398 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.245378532 +0000 UTC m=+118.356914422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.748717 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.774407 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-95kbf"] Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.846168 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.846407 5117 generic.go:358] "Generic (PLEG): container finished" podID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerID="d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7" exitCode=0 Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.846513 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.346496286 +0000 UTC m=+118.458032176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.847972 5117 generic.go:358] "Generic (PLEG): container finished" podID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerID="2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2" exitCode=0 Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.849408 5117 generic.go:358] "Generic (PLEG): container finished" podID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerID="13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476" exitCode=0 Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.850961 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/4.log" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.851380 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.852709 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" exitCode=255 Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.854451 5117 generic.go:358] "Generic (PLEG): container finished" podID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerID="bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57" exitCode=0 Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.947890 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.948093 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.448042932 +0000 UTC m=+118.559578822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:34 crc kubenswrapper[5117]: I0130 00:12:34.948465 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:34 crc kubenswrapper[5117]: E0130 00:12:34.948866 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.448858625 +0000 UTC m=+118.560394515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: W0130 00:12:35.018705 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96d26479_7c9f_4877_afc4_338863fcdf4d.slice/crio-127ee498696d9f6992d9070b9acfaa22fc589904a54689fdc1cf5882e5662952 WatchSource:0}: Error finding container 127ee498696d9f6992d9070b9acfaa22fc589904a54689fdc1cf5882e5662952: Status 404 returned error can't find the container with id 127ee498696d9f6992d9070b9acfaa22fc589904a54689fdc1cf5882e5662952 Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.049379 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.049617 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.549574488 +0000 UTC m=+118.661110378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.050189 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.050557 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.550541995 +0000 UTC m=+118.662077875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.080247 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-nkcjt" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.080297 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95kbf"] Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.080312 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2hcj"] Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.080530 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.083902 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.100995 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-s2hrs" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101054 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerDied","Data":"d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101079 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerStarted","Data":"2b44cc134c1be33631a8d3969d85f043d85f6636f80f987c832026840f7862ce"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101089 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9hmp" event={"ID":"d8e9f7c6-ffd2-40f7-82fa-9fab50710838","Type":"ContainerDied","Data":"2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101105 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerDied","Data":"13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101115 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101130 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerDied","Data":"bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101141 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"41c58dcc-05ad-46da-b0c4-aa033ff08da2","Type":"ContainerStarted","Data":"60d4cfce1c257e1dfa8468e3704940c91e72a43b5389bf6a55344f170d28bb56"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.101169 5117 scope.go:117] "RemoveContainer" containerID="d9b5be9f5ba63201b909d182125108fe074ba94ee7bb5d54ec09478479a75948" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.102106 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.102334 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.104508 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:35 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:35 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:35 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.104570 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.151255 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.151465 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.651430553 +0000 UTC m=+118.762966443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.151962 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfbf\" (UniqueName: \"kubernetes.io/projected/c8afa4c5-96fe-4cf5-b8cb-d61786386452-kube-api-access-mcfbf\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.152035 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.152259 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-catalog-content\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.152309 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-utilities\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " 
pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.152542 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.652519733 +0000 UTC m=+118.764055623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.254785 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.255840 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.755795308 +0000 UTC m=+118.867331198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.256001 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-catalog-content\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.256549 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-utilities\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.256704 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfbf\" (UniqueName: \"kubernetes.io/projected/c8afa4c5-96fe-4cf5-b8cb-d61786386452-kube-api-access-mcfbf\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.256752 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.265796 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-catalog-content\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.266795 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.766760768 +0000 UTC m=+118.878296658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.266827 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-utilities\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.297339 5117 ???:1] "http: TLS handshake error from 192.168.126.11:38146: no serving certificate available for the kubelet" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.306485 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfbf\" (UniqueName: \"kubernetes.io/projected/c8afa4c5-96fe-4cf5-b8cb-d61786386452-kube-api-access-mcfbf\") pod \"redhat-marketplace-95kbf\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.362589 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.362793 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.862743927 +0000 UTC m=+118.974279817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.363108 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.363463 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.863449077 +0000 UTC m=+118.974984957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.445034 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.464620 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.464998 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:35.964975062 +0000 UTC m=+119.076510952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.564461 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p98f5"] Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.566979 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.567387 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.067373422 +0000 UTC m=+119.178909312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.574434 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.578320 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.580769 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p98f5"] Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.669653 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.669873 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.169831544 +0000 UTC m=+119.281367434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.670273 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.670399 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-catalog-content\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.670558 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-utilities\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.670606 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggfp\" (UniqueName: \"kubernetes.io/projected/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-kube-api-access-cggfp\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.670969 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.170959096 +0000 UTC m=+119.282494986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.681341 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.716313 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95kbf"] Jan 30 00:12:35 crc kubenswrapper[5117]: W0130 00:12:35.740640 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa4c5_96fe_4cf5_b8cb_d61786386452.slice/crio-6446d0c8a70f78204e905c92c6ac633ea6829bb377dd20072e3158041e5c5e25 WatchSource:0}: Error finding container 6446d0c8a70f78204e905c92c6ac633ea6829bb377dd20072e3158041e5c5e25: Status 404 returned error can't find the container with id 6446d0c8a70f78204e905c92c6ac633ea6829bb377dd20072e3158041e5c5e25 Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.772467 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.772619 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.272589824 +0000 UTC m=+119.384125714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.772914 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-catalog-content\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.773103 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-utilities\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.773145 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cggfp\" (UniqueName: \"kubernetes.io/projected/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-kube-api-access-cggfp\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.773180 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.773309 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-catalog-content\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.773918 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-utilities\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.774152 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.274143268 +0000 UTC m=+119.385679158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.792426 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggfp\" (UniqueName: \"kubernetes.io/projected/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-kube-api-access-cggfp\") pod \"redhat-operators-p98f5\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.871625 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"41c58dcc-05ad-46da-b0c4-aa033ff08da2","Type":"ContainerStarted","Data":"eff10315cd03376bf524614c36ba60d57fa9e9a9147046c5ef812f02c3c6873a"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.873431 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95kbf" event={"ID":"c8afa4c5-96fe-4cf5-b8cb-d61786386452","Type":"ContainerStarted","Data":"6446d0c8a70f78204e905c92c6ac633ea6829bb377dd20072e3158041e5c5e25"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.873605 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.873884 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.873953 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.874023 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.373993736 +0000 UTC m=+119.485529626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.874123 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.874512 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.374505121 +0000 UTC m=+119.486041011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.874867 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.875951 5117 generic.go:358] "Generic (PLEG): container finished" podID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerID="03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67" exitCode=0 Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.876097 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2hcj" event={"ID":"96d26479-7c9f-4877-afc4-338863fcdf4d","Type":"ContainerDied","Data":"03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.876128 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2hcj" event={"ID":"96d26479-7c9f-4877-afc4-338863fcdf4d","Type":"ContainerStarted","Data":"127ee498696d9f6992d9070b9acfaa22fc589904a54689fdc1cf5882e5662952"} Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.879663 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/4.log" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.879920 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") 
pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.888876 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.8888498460000003 podStartE2EDuration="2.888849846s" podCreationTimestamp="2026-01-30 00:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:35.886251902 +0000 UTC m=+118.997787802" watchObservedRunningTime="2026-01-30 00:12:35.888849846 +0000 UTC m=+119.000385736" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.918930 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.926329 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.965323 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.975362 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l95kd"] Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.975834 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.976022 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.475989115 +0000 UTC m=+119.587525005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.976179 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:35 crc kubenswrapper[5117]: E0130 00:12:35.977419 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:36.477403045 +0000 UTC m=+119.588939005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.980624 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:35 crc kubenswrapper[5117]: I0130 00:12:35.994062 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l95kd"] Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.077527 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.077851 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-utilities\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.077882 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4cpv\" (UniqueName: \"kubernetes.io/projected/d4202452-295a-4f89-bc23-cdbf6c271f02-kube-api-access-f4cpv\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.077906 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-catalog-content\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.078608 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.578584541 +0000 UTC m=+119.690120431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.089532 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:36 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:36 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:36 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.089608 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.184108 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-utilities\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.184171 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.184205 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4cpv\" (UniqueName: \"kubernetes.io/projected/d4202452-295a-4f89-bc23-cdbf6c271f02-kube-api-access-f4cpv\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.184233 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-catalog-content\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.184602 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-utilities\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.184999 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:36.684980314 +0000 UTC m=+119.796516204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.185443 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-catalog-content\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.203400 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4cpv\" (UniqueName: \"kubernetes.io/projected/d4202452-295a-4f89-bc23-cdbf6c271f02-kube-api-access-f4cpv\") pod \"redhat-operators-l95kd\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.221360 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p98f5"] Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.285651 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.285868 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.7858312 +0000 UTC m=+119.897367090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.285949 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.286339 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:36.786324504 +0000 UTC m=+119.897860394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.331166 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.387666 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.388093 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.888069135 +0000 UTC m=+119.999605025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.388186 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.388763 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.888755475 +0000 UTC m=+120.000291365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.491569 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.494835 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:36.994807888 +0000 UTC m=+120.106343768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.600951 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.601450 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.101427827 +0000 UTC m=+120.212963717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.684105 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-mmnjm" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.702395 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.702575 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.202537231 +0000 UTC m=+120.314073121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.703084 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.703492 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.203475928 +0000 UTC m=+120.315011818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.767868 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-mq4qt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.767941 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mq4qt" podUID="e161fe62-f260-4253-a91c-00d71e12cd51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.777933 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.807353 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.808654 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.308635356 +0000 UTC m=+120.420171246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.813374 5117 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.902581 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"152031156e7d4db31d2456417ff5ea64618e488ce0b2702157ea8d8bf65b6340"} Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.902641 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"aa79066a349ccf393d1d55fb2a882107d57687ef546e649e34b7c91c2315fa0c"} Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.911573 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l95kd"] Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.911724 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:36 crc kubenswrapper[5117]: E0130 00:12:36.912143 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.412124176 +0000 UTC m=+120.523660066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.926404 5117 generic.go:358] "Generic (PLEG): container finished" podID="41c58dcc-05ad-46da-b0c4-aa033ff08da2" containerID="eff10315cd03376bf524614c36ba60d57fa9e9a9147046c5ef812f02c3c6873a" exitCode=0 Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.926559 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"41c58dcc-05ad-46da-b0c4-aa033ff08da2","Type":"ContainerDied","Data":"eff10315cd03376bf524614c36ba60d57fa9e9a9147046c5ef812f02c3c6873a"} Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.947571 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c28gc" event={"ID":"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe","Type":"ContainerStarted","Data":"24aabf8f9213c984d458ec83a3328861e5b4a4ea2e6cf668e88acbb1d4df6a1f"} Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.952517 5117 generic.go:358] "Generic (PLEG): container finished" podID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerID="c85453f8953d85a9a144261d97e1d225b4489bb64808095ab3814b05e68adf95" exitCode=0 Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.952612 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95kbf" event={"ID":"c8afa4c5-96fe-4cf5-b8cb-d61786386452","Type":"ContainerDied","Data":"c85453f8953d85a9a144261d97e1d225b4489bb64808095ab3814b05e68adf95"} Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.957076 5117 generic.go:358] "Generic (PLEG): container finished" podID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerID="d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e" exitCode=0 Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.957161 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p98f5" event={"ID":"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6","Type":"ContainerDied","Data":"d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e"} Jan 30 00:12:36 crc kubenswrapper[5117]: I0130 00:12:36.957186 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p98f5" event={"ID":"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6","Type":"ContainerStarted","Data":"a6643befd0ee3808a34fe945c9b3bbcb792b8c4912973ea63e1b6c2978e9785b"} Jan 30 00:12:36 crc kubenswrapper[5117]: W0130 00:12:36.964728 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4202452_295a_4f89_bc23_cdbf6c271f02.slice/crio-d4dd41a8f1b12f4e23480bf4f5aa9b6352d61f9a2d431d27b98ce0bcf6595a6c WatchSource:0}: Error finding container d4dd41a8f1b12f4e23480bf4f5aa9b6352d61f9a2d431d27b98ce0bcf6595a6c: Status 404 returned error can't find the container with id d4dd41a8f1b12f4e23480bf4f5aa9b6352d61f9a2d431d27b98ce0bcf6595a6c Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.013783 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:37 crc kubenswrapper[5117]: E0130 00:12:37.015585 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.515553926 +0000 UTC m=+120.627089816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.094383 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:37 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:37 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:37 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.094910 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.117985 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:37 crc kubenswrapper[5117]: E0130 00:12:37.118350 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:37.618335737 +0000 UTC m=+120.729871627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-ndwrw" (UID: "9e140562-67a0-4a82-bfab-c678258c734e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.138138 5117 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T00:12:36.81339431Z","UUID":"076c16ff-8744-4615-912b-36fdf076876d","Handler":null,"Name":"","Endpoint":""} Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.141596 5117 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.141634 5117 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.219128 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.228406 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.320760 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.342985 5117 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
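[Editor's note on the retry loop above] The entries above show the kubelet's volume reconciler retrying MountVolume.MountDevice and UnmountVolume.TearDown for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 every 500ms ("durationBeforeRetry 500ms") because kubevirt.io.hostpath-provisioner is not yet in the registered CSI driver list. The loop breaks between 00:12:36.813 and 00:12:37.141, once the plugin watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock and csi_plugin.go validates and registers the driver, after which MountDevice is skipped (STAGE_UNSTAGE_VOLUME capability not set) and SetUp succeeds. Below is a minimal, self-contained Go sketch of that register-then-retry interaction; the names here (driverRegistry, mountDevice) are illustrative assumptions, not kubelet's real types.

// Toy reproduction of the pattern in the log above: mount attempts fail with
// "driver name ... not found in the list of registered CSI drivers" on a
// fixed 500ms retry interval until a concurrent registration (standing in
// for the plugin watcher) adds the driver. Hypothetical names throughout.
package main

import (
	"fmt"
	"sync"
	"time"
)

type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint socket path
}

func (r *driverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

func (r *driverRegistry) mountDevice(driver, volume string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[driver]; !ok {
		// Mirrors the error string emitted by the kubelet above.
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	fmt.Printf("MountDevice succeeded for volume %s\n", volume)
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}

	// Simulate the plugin watcher registering the driver socket a little
	// later, as happens at 00:12:36.813 / 00:12:37.141 in the log.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner",
			"/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	}()

	const backoff = 500 * time.Millisecond // matches durationBeforeRetry 500ms
	for attempt := 1; ; attempt++ {
		err := reg.mountDevice("kubevirt.io.hostpath-provisioner",
			"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2")
		if err == nil {
			return
		}
		fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, backoff)
		time.Sleep(backoff)
	}
}

In the real kubelet the per-volume retry schedule comes from nestedpendingoperations (the source of the "No retries permitted until ..." lines) rather than a fixed sleep, and registration arrives over the gRPC plugin-registration socket rather than an in-process map; the constant 500ms here only mirrors the durationBeforeRetry intervals visible in this particular log. [End editor's note; log resumes]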
Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.343048 5117 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.381814 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-ndwrw\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") " pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.661431 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.882474 5117 ???:1] "http: TLS handshake error from 192.168.126.11:44044: no serving certificate available for the kubelet" Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.885116 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ndwrw"] Jan 30 00:12:37 crc kubenswrapper[5117]: W0130 00:12:37.901522 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e140562_67a0_4a82_bfab_c678258c734e.slice/crio-e89c60a11c9d29bdf29e935d883c8d9b80212236b8832dd1985c17a25e3d67bf WatchSource:0}: Error finding container e89c60a11c9d29bdf29e935d883c8d9b80212236b8832dd1985c17a25e3d67bf: Status 404 returned error can't find the container with id e89c60a11c9d29bdf29e935d883c8d9b80212236b8832dd1985c17a25e3d67bf Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.972377 5117 generic.go:358] "Generic (PLEG): container finished" podID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerID="b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354" exitCode=0 Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.973537 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerDied","Data":"b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354"} Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.973619 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerStarted","Data":"d4dd41a8f1b12f4e23480bf4f5aa9b6352d61f9a2d431d27b98ce0bcf6595a6c"} Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.991102 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c28gc" event={"ID":"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe","Type":"ContainerStarted","Data":"8e78992ac8e402ab4be239cb74a4784fa6b258fee8d4e2a87638c6f6e7263a2e"} Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.991213 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-c28gc" event={"ID":"4f4cf379-6e53-4fc8-8527-4e80b9aaccbe","Type":"ContainerStarted","Data":"34084e9a3021e1cb2a881ae3aa2d3eb5aef4520b60418f1f76218644cf98a185"} Jan 30 00:12:37 crc kubenswrapper[5117]: I0130 00:12:37.998280 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" event={"ID":"9e140562-67a0-4a82-bfab-c678258c734e","Type":"ContainerStarted","Data":"e89c60a11c9d29bdf29e935d883c8d9b80212236b8832dd1985c17a25e3d67bf"} Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.019521 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-c28gc" podStartSLOduration=19.01948835 podStartE2EDuration="19.01948835s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:38.011576586 +0000 UTC m=+121.123112486" watchObservedRunningTime="2026-01-30 00:12:38.01948835 +0000 UTC m=+121.131024240" Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.092453 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:38 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:38 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:38 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.092518 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.331298 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.435480 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kube-api-access\") pod \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.435576 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kubelet-dir\") pod \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\" (UID: \"41c58dcc-05ad-46da-b0c4-aa033ff08da2\") " Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.435867 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41c58dcc-05ad-46da-b0c4-aa033ff08da2" (UID: "41c58dcc-05ad-46da-b0c4-aa033ff08da2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.450615 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41c58dcc-05ad-46da-b0c4-aa033ff08da2" (UID: "41c58dcc-05ad-46da-b0c4-aa033ff08da2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.536890 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:38 crc kubenswrapper[5117]: I0130 00:12:38.536928 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41c58dcc-05ad-46da-b0c4-aa033ff08da2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.007232 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.007262 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"41c58dcc-05ad-46da-b0c4-aa033ff08da2","Type":"ContainerDied","Data":"60d4cfce1c257e1dfa8468e3704940c91e72a43b5389bf6a55344f170d28bb56"} Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.007311 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d4cfce1c257e1dfa8468e3704940c91e72a43b5389bf6a55344f170d28bb56" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.011155 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" event={"ID":"9e140562-67a0-4a82-bfab-c678258c734e","Type":"ContainerStarted","Data":"581534ae07887e61de6c967a181171dd3f79c7ac639636656b7f8480a6fa3541"} Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.011716 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.034461 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" podStartSLOduration=98.034443736 podStartE2EDuration="1m38.034443736s" podCreationTimestamp="2026-01-30 00:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:39.030093723 +0000 UTC m=+122.141629623" watchObservedRunningTime="2026-01-30 00:12:39.034443736 +0000 UTC m=+122.145979626" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.060671 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.088286 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:39 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 
00:12:39 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:39 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.088364 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.099968 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vxvgr" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.444429 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.462454 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-scnb9" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.780342 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.781337 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8x2r4" Jan 30 00:12:39 crc kubenswrapper[5117]: I0130 00:12:39.785940 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fw9pw" Jan 30 00:12:40 crc kubenswrapper[5117]: I0130 00:12:40.088333 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:40 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:40 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:40 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:40 crc kubenswrapper[5117]: I0130 00:12:40.088505 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:40 crc kubenswrapper[5117]: I0130 00:12:40.759633 5117 patch_prober.go:28] interesting pod/console-64d44f6ddf-dvncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 00:12:40 crc kubenswrapper[5117]: I0130 00:12:40.759734 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-dvncc" podUID="3c09a221-05c5-4aa7-a59f-7501885dd323" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 00:12:40 crc kubenswrapper[5117]: I0130 00:12:40.778777 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-mq4qt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 00:12:40 crc kubenswrapper[5117]: I0130 
00:12:40.778862 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mq4qt" podUID="e161fe62-f260-4253-a91c-00d71e12cd51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 00:12:40 crc kubenswrapper[5117]: E0130 00:12:40.794292 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:40 crc kubenswrapper[5117]: E0130 00:12:40.797075 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:40 crc kubenswrapper[5117]: E0130 00:12:40.798971 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:40 crc kubenswrapper[5117]: E0130 00:12:40.799007 5117 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:41 crc kubenswrapper[5117]: I0130 00:12:41.073523 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:41 crc kubenswrapper[5117]: I0130 00:12:41.074419 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:12:41 crc kubenswrapper[5117]: E0130 00:12:41.074826 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:12:41 crc kubenswrapper[5117]: I0130 00:12:41.088408 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:41 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 30 00:12:41 crc kubenswrapper[5117]: [+]process-running ok Jan 30 00:12:41 crc kubenswrapper[5117]: healthz check failed Jan 30 00:12:41 crc kubenswrapper[5117]: I0130 00:12:41.088510 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500"
Jan 30 00:12:41 crc kubenswrapper[5117]: I0130 00:12:41.827601 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2rpjg"
Jan 30 00:12:42 crc kubenswrapper[5117]: I0130 00:12:42.087813 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:42 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld
Jan 30 00:12:42 crc kubenswrapper[5117]: [+]process-running ok
Jan 30 00:12:42 crc kubenswrapper[5117]: healthz check failed
Jan 30 00:12:42 crc kubenswrapper[5117]: I0130 00:12:42.087925 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:43 crc kubenswrapper[5117]: I0130 00:12:43.027818 5117 ???:1] "http: TLS handshake error from 192.168.126.11:44046: no serving certificate available for the kubelet"
Jan 30 00:12:43 crc kubenswrapper[5117]: I0130 00:12:43.118604 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:43 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld
Jan 30 00:12:43 crc kubenswrapper[5117]: [+]process-running ok
Jan 30 00:12:43 crc kubenswrapper[5117]: healthz check failed
Jan 30 00:12:43 crc kubenswrapper[5117]: I0130 00:12:43.118731 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:44 crc kubenswrapper[5117]: I0130 00:12:44.088026 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:44 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld
Jan 30 00:12:44 crc kubenswrapper[5117]: [+]process-running ok
Jan 30 00:12:44 crc kubenswrapper[5117]: healthz check failed
Jan 30 00:12:44 crc kubenswrapper[5117]: I0130 00:12:44.089550 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:45 crc kubenswrapper[5117]: I0130 00:12:45.087401 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-2rttq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 00:12:45 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld
Jan 30 00:12:45 crc kubenswrapper[5117]: [+]process-running ok
Jan 30 00:12:45 crc kubenswrapper[5117]: healthz check failed
Jan 30 00:12:45 crc kubenswrapper[5117]: I0130 00:12:45.087480 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-2rttq" podUID="e148c5fe-c209-4e41-82bb-aa78a79c0d66" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 00:12:45 crc kubenswrapper[5117]: I0130 00:12:45.385570 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt"
Jan 30 00:12:46 crc kubenswrapper[5117]: I0130 00:12:46.091569 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:46 crc kubenswrapper[5117]: I0130 00:12:46.095733 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-2rttq"
Jan 30 00:12:46 crc kubenswrapper[5117]: I0130 00:12:46.770785 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-mq4qt"
Jan 30 00:12:49 crc kubenswrapper[5117]: I0130 00:12:49.290254 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"]
Jan 30 00:12:49 crc kubenswrapper[5117]: I0130 00:12:49.290925 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager" containerID="cri-o://6e653134294a876171722b23a25dca9f7839fa891b824b3b44f5a10bade30a4c" gracePeriod=30
Jan 30 00:12:49 crc kubenswrapper[5117]: I0130 00:12:49.302540 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"]
Jan 30 00:12:49 crc kubenswrapper[5117]: I0130 00:12:49.303128 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager" containerID="cri-o://632bd71ed6200d8c3c063f866e29264eed700687cda02c2f2944bda4f747ede5" gracePeriod=30
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.086955 5117 generic.go:358] "Generic (PLEG): container finished" podID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerID="632bd71ed6200d8c3c063f866e29264eed700687cda02c2f2944bda4f747ede5" exitCode=0
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.087064 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" event={"ID":"f1cd991b-8078-45cb-9591-ae3f5a4d4db4","Type":"ContainerDied","Data":"632bd71ed6200d8c3c063f866e29264eed700687cda02c2f2944bda4f747ede5"}
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.088677 5117 generic.go:358] "Generic (PLEG): container finished" podID="eb191c78-b1b1-4b69-b609-210416eb3356" containerID="6e653134294a876171722b23a25dca9f7839fa891b824b3b44f5a10bade30a4c" exitCode=0
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.088728 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" event={"ID":"eb191c78-b1b1-4b69-b609-210416eb3356","Type":"ContainerDied","Data":"6e653134294a876171722b23a25dca9f7839fa891b824b3b44f5a10bade30a4c"}
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.655381 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.700307 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"]
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.700892 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.700910 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.700941 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41c58dcc-05ad-46da-b0c4-aa033ff08da2" containerName="pruner"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.700947 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c58dcc-05ad-46da-b0c4-aa033ff08da2" containerName="pruner"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.701040 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="41c58dcc-05ad-46da-b0c4-aa033ff08da2" containerName="pruner"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.701052 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" containerName="route-controller-manager"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.713213 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-config\") pod \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") "
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.713300 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fv4g\" (UniqueName: \"kubernetes.io/projected/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-kube-api-access-9fv4g\") pod \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") "
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.714285 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-config" (OuterVolumeSpecName: "config") pod "f1cd991b-8078-45cb-9591-ae3f5a4d4db4" (UID: "f1cd991b-8078-45cb-9591-ae3f5a4d4db4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.730719 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-kube-api-access-9fv4g" (OuterVolumeSpecName: "kube-api-access-9fv4g") pod "f1cd991b-8078-45cb-9591-ae3f5a4d4db4" (UID: "f1cd991b-8078-45cb-9591-ae3f5a4d4db4"). InnerVolumeSpecName "kube-api-access-9fv4g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.732816 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"]
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.733198 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.785556 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-dvncc"
Jan 30 00:12:50 crc kubenswrapper[5117]: E0130 00:12:50.796364 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.796391 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-dvncc"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.818059 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-client-ca\") pod \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") "
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.818120 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-serving-cert\") pod \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") "
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.818304 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-tmp\") pod \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\" (UID: \"f1cd991b-8078-45cb-9591-ae3f5a4d4db4\") "
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.818606 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.818632 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9fv4g\" (UniqueName: \"kubernetes.io/projected/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-kube-api-access-9fv4g\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.819016 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-tmp" (OuterVolumeSpecName: "tmp") pod "f1cd991b-8078-45cb-9591-ae3f5a4d4db4" (UID: "f1cd991b-8078-45cb-9591-ae3f5a4d4db4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.820106 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1cd991b-8078-45cb-9591-ae3f5a4d4db4" (UID: "f1cd991b-8078-45cb-9591-ae3f5a4d4db4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.828828 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1cd991b-8078-45cb-9591-ae3f5a4d4db4" (UID: "f1cd991b-8078-45cb-9591-ae3f5a4d4db4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:50 crc kubenswrapper[5117]: E0130 00:12:50.831744 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:50 crc kubenswrapper[5117]: E0130 00:12:50.838052 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:12:50 crc kubenswrapper[5117]: E0130 00:12:50.838120 5117 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.919653 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-config\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.919733 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fb34a2-9d65-4ba9-aae4-9697cb736b01-tmp\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.919956 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fb34a2-9d65-4ba9-aae4-9697cb736b01-serving-cert\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.920132 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-client-ca\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.920558 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktmd\" (UniqueName: \"kubernetes.io/projected/42fb34a2-9d65-4ba9-aae4-9697cb736b01-kube-api-access-qktmd\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.920845 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-tmp\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.920954 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:50 crc kubenswrapper[5117]: I0130 00:12:50.920986 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cd991b-8078-45cb-9591-ae3f5a4d4db4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.022049 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fb34a2-9d65-4ba9-aae4-9697cb736b01-tmp\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.022126 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fb34a2-9d65-4ba9-aae4-9697cb736b01-serving-cert\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.022173 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-client-ca\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.022224 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qktmd\" (UniqueName: \"kubernetes.io/projected/42fb34a2-9d65-4ba9-aae4-9697cb736b01-kube-api-access-qktmd\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.022299 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-config\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.022834 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fb34a2-9d65-4ba9-aae4-9697cb736b01-tmp\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.023371 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-client-ca\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.024408 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-config\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.028934 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fb34a2-9d65-4ba9-aae4-9697cb736b01-serving-cert\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.038069 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktmd\" (UniqueName: \"kubernetes.io/projected/42fb34a2-9d65-4ba9-aae4-9697cb736b01-kube-api-access-qktmd\") pod \"route-controller-manager-599dc665bd-t9mpz\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") " pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.095298 5117 generic.go:358] "Generic (PLEG): container finished" podID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerID="845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d" exitCode=0
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.095378 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9hmp" event={"ID":"d8e9f7c6-ffd2-40f7-82fa-9fab50710838","Type":"ContainerDied","Data":"845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.103502 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerStarted","Data":"eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.106234 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.106287 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb" event={"ID":"f1cd991b-8078-45cb-9591-ae3f5a4d4db4","Type":"ContainerDied","Data":"9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.106335 5117 scope.go:117] "RemoveContainer" containerID="632bd71ed6200d8c3c063f866e29264eed700687cda02c2f2944bda4f747ede5"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.108295 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerStarted","Data":"e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.110010 5117 generic.go:358] "Generic (PLEG): container finished" podID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerID="d6b80db46aee6e6c0d623048b017742bf66cb7a0562173d5ec24bf01a9bd0c0e" exitCode=0
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.110126 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95kbf" event={"ID":"c8afa4c5-96fe-4cf5-b8cb-d61786386452","Type":"ContainerDied","Data":"d6b80db46aee6e6c0d623048b017742bf66cb7a0562173d5ec24bf01a9bd0c0e"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.112731 5117 generic.go:358] "Generic (PLEG): container finished" podID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerID="9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e" exitCode=0
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.112869 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2hcj" event={"ID":"96d26479-7c9f-4877-afc4-338863fcdf4d","Type":"ContainerDied","Data":"9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.124635 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs" event={"ID":"eb191c78-b1b1-4b69-b609-210416eb3356","Type":"ContainerDied","Data":"e76baec32ee4694a878130ac5c59a178acbac85a0f06a4e1b6ca8abed52ecc60"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.124675 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e76baec32ee4694a878130ac5c59a178acbac85a0f06a4e1b6ca8abed52ecc60"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.125634 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.129798 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerStarted","Data":"2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069"}
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.183952 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"]
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.186330 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-cgnvb"]
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.205056 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.225455 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb191c78-b1b1-4b69-b609-210416eb3356-serving-cert\") pod \"eb191c78-b1b1-4b69-b609-210416eb3356\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") "
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.225515 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-proxy-ca-bundles\") pod \"eb191c78-b1b1-4b69-b609-210416eb3356\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") "
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.225539 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-client-ca\") pod \"eb191c78-b1b1-4b69-b609-210416eb3356\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") "
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.225563 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbkm8\" (UniqueName: \"kubernetes.io/projected/eb191c78-b1b1-4b69-b609-210416eb3356-kube-api-access-zbkm8\") pod \"eb191c78-b1b1-4b69-b609-210416eb3356\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") "
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.225608 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb191c78-b1b1-4b69-b609-210416eb3356-tmp\") pod \"eb191c78-b1b1-4b69-b609-210416eb3356\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") "
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.225669 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-config\") pod \"eb191c78-b1b1-4b69-b609-210416eb3356\" (UID: \"eb191c78-b1b1-4b69-b609-210416eb3356\") "
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.226771 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb191c78-b1b1-4b69-b609-210416eb3356" (UID: "eb191c78-b1b1-4b69-b609-210416eb3356"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.227101 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.228089 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb191c78-b1b1-4b69-b609-210416eb3356-tmp" (OuterVolumeSpecName: "tmp") pod "eb191c78-b1b1-4b69-b609-210416eb3356" (UID: "eb191c78-b1b1-4b69-b609-210416eb3356"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.228352 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eb191c78-b1b1-4b69-b609-210416eb3356" (UID: "eb191c78-b1b1-4b69-b609-210416eb3356"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.228538 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-config" (OuterVolumeSpecName: "config") pod "eb191c78-b1b1-4b69-b609-210416eb3356" (UID: "eb191c78-b1b1-4b69-b609-210416eb3356"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.230754 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb191c78-b1b1-4b69-b609-210416eb3356-kube-api-access-zbkm8" (OuterVolumeSpecName: "kube-api-access-zbkm8") pod "eb191c78-b1b1-4b69-b609-210416eb3356" (UID: "eb191c78-b1b1-4b69-b609-210416eb3356"). InnerVolumeSpecName "kube-api-access-zbkm8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.231280 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb191c78-b1b1-4b69-b609-210416eb3356-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb191c78-b1b1-4b69-b609-210416eb3356" (UID: "eb191c78-b1b1-4b69-b609-210416eb3356"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.327709 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbkm8\" (UniqueName: \"kubernetes.io/projected/eb191c78-b1b1-4b69-b609-210416eb3356-kube-api-access-zbkm8\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.328061 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eb191c78-b1b1-4b69-b609-210416eb3356-tmp\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.328072 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.328079 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb191c78-b1b1-4b69-b609-210416eb3356-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:51 crc kubenswrapper[5117]: I0130 00:12:51.328090 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb191c78-b1b1-4b69-b609-210416eb3356-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.141893 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerStarted","Data":"956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.142304 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7888c87fc5-727nr"]
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.143513 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.143532 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.143743 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" containerName="controller-manager"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.145010 5117 generic.go:358] "Generic (PLEG): container finished" podID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerID="eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a" exitCode=0
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.148394 5117 generic.go:358] "Generic (PLEG): container finished" podID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerID="e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e" exitCode=0
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.150329 5117 generic.go:358] "Generic (PLEG): container finished" podID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerID="33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f" exitCode=0
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.155136 5117 generic.go:358] "Generic (PLEG): container finished" podID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerID="2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069" exitCode=0
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261585 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerDied","Data":"eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261754 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7888c87fc5-727nr"]
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261852 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerDied","Data":"e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261911 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"]
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261931 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p98f5" event={"ID":"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6","Type":"ContainerDied","Data":"33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261952 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" event={"ID":"42fb34a2-9d65-4ba9-aae4-9697cb736b01","Type":"ContainerStarted","Data":"30f61c78c94d52dbf6e34a0d9803041c0ebfc94aa4923c0d89cad6abd2b3ea58"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261964 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2hcj" event={"ID":"96d26479-7c9f-4877-afc4-338863fcdf4d","Type":"ContainerStarted","Data":"b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.261975 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerDied","Data":"2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069"}
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.262293 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.262531 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.340910 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22ws\" (UniqueName: \"kubernetes.io/projected/7ca6f152-160f-4292-9590-0950b8efbc34-kube-api-access-l22ws\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.340992 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-proxy-ca-bundles\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.341300 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca6f152-160f-4292-9590-0950b8efbc34-serving-cert\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.341362 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-config\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.341609 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7ca6f152-160f-4292-9590-0950b8efbc34-tmp\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.341946 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-client-ca\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.361030 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"]
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.366581 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-g7kqs"]
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.444058 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-proxy-ca-bundles\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.444185 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca6f152-160f-4292-9590-0950b8efbc34-serving-cert\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.444232 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-config\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.444294 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7ca6f152-160f-4292-9590-0950b8efbc34-tmp\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.444347 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-client-ca\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.444413 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l22ws\" (UniqueName: \"kubernetes.io/projected/7ca6f152-160f-4292-9590-0950b8efbc34-kube-api-access-l22ws\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.445597 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7ca6f152-160f-4292-9590-0950b8efbc34-tmp\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.445779 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-proxy-ca-bundles\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.445799 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-client-ca\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.446296 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-config\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.457056 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca6f152-160f-4292-9590-0950b8efbc34-serving-cert\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.466071 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22ws\" (UniqueName: \"kubernetes.io/projected/7ca6f152-160f-4292-9590-0950b8efbc34-kube-api-access-l22ws\") pod \"controller-manager-7888c87fc5-727nr\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.611526 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:52 crc kubenswrapper[5117]: I0130 00:12:52.716327 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c"
Jan 30 00:12:52 crc kubenswrapper[5117]: E0130 00:12:52.717129 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.171185 5117 generic.go:358] "Generic (PLEG): container finished" podID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerID="956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea" exitCode=0
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.198154 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb191c78-b1b1-4b69-b609-210416eb3356" path="/var/lib/kubelet/pods/eb191c78-b1b1-4b69-b609-210416eb3356/volumes"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199088 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cd991b-8078-45cb-9591-ae3f5a4d4db4" path="/var/lib/kubelet/pods/f1cd991b-8078-45cb-9591-ae3f5a4d4db4/volumes"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199475 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95kbf" event={"ID":"c8afa4c5-96fe-4cf5-b8cb-d61786386452","Type":"ContainerStarted","Data":"0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199659 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199679 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7888c87fc5-727nr"]
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199704 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" event={"ID":"42fb34a2-9d65-4ba9-aae4-9697cb736b01","Type":"ContainerStarted","Data":"a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199714 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerStarted","Data":"e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199730 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerDied","Data":"956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199743 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9hmp" event={"ID":"d8e9f7c6-ffd2-40f7-82fa-9fab50710838","Type":"ContainerStarted","Data":"24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199755 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" event={"ID":"7ca6f152-160f-4292-9590-0950b8efbc34","Type":"ContainerStarted","Data":"b75f570640ff71b1357c119ec5ee078c417015bdf9f36754bf1efb445472f9f9"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.199764 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerStarted","Data":"b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828"}
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.222166 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-95kbf" podStartSLOduration=5.776739814 podStartE2EDuration="19.222139295s" podCreationTimestamp="2026-01-30 00:12:34 +0000 UTC" firstStartedPulling="2026-01-30 00:12:36.953440663 +0000 UTC m=+120.064976553" lastFinishedPulling="2026-01-30 00:12:50.398840104 +0000 UTC m=+133.510376034" observedRunningTime="2026-01-30 00:12:53.22196196 +0000 UTC m=+136.333497850" watchObservedRunningTime="2026-01-30 00:12:53.222139295 +0000 UTC m=+136.333675195"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.243396 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" podStartSLOduration=4.24332534 podStartE2EDuration="4.24332534s" podCreationTimestamp="2026-01-30 00:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:53.240563702 +0000 UTC m=+136.352099602" watchObservedRunningTime="2026-01-30 00:12:53.24332534 +0000 UTC m=+136.354861230"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.267794 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b9hmp" podStartSLOduration=5.919739879 podStartE2EDuration="21.267778866s" podCreationTimestamp="2026-01-30 00:12:32 +0000 UTC" firstStartedPulling="2026-01-30 00:12:35.085575804 +0000 UTC m=+118.197111694" lastFinishedPulling="2026-01-30 00:12:50.433614791 +0000 UTC m=+133.545150681" observedRunningTime="2026-01-30 00:12:53.267467447 +0000 UTC m=+136.379003377" watchObservedRunningTime="2026-01-30 00:12:53.267778866 +0000 UTC m=+136.379314756"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.295423 5117 ???:1] "http: TLS handshake error from 192.168.126.11:33634: no serving certificate available for the kubelet"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.427604 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b9hmp"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.427646 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-b9hmp"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.495377 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x2hcj" podStartSLOduration=5.046590791 podStartE2EDuration="19.495362074s" podCreationTimestamp="2026-01-30 00:12:34 +0000 UTC" firstStartedPulling="2026-01-30 00:12:35.876726893 +0000 UTC m=+118.988262773" lastFinishedPulling="2026-01-30 00:12:50.325498156 +0000 UTC m=+133.437034056" observedRunningTime="2026-01-30 00:12:53.282239412 +0000 UTC m=+136.393775312" watchObservedRunningTime="2026-01-30 00:12:53.495362074 +0000 UTC m=+136.606897964"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.497340 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-26tjl" podStartSLOduration=6.181979282 podStartE2EDuration="21.49733185s" podCreationTimestamp="2026-01-30 00:12:32 +0000 UTC" firstStartedPulling="2026-01-30 00:12:35.088112986 +0000 UTC m=+118.199648876" lastFinishedPulling="2026-01-30 00:12:50.403465544 +0000 UTC m=+133.515001444" observedRunningTime="2026-01-30 00:12:53.493834342 +0000 UTC m=+136.605370252" watchObservedRunningTime="2026-01-30 00:12:53.49733185 +0000 UTC m=+136.608867740"
Jan 30 00:12:53 crc kubenswrapper[5117]: I0130 00:12:53.842793 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.186251 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" event={"ID":"7ca6f152-160f-4292-9590-0950b8efbc34","Type":"ContainerStarted","Data":"c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0"}
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.186488 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.192195 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerStarted","Data":"0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246"}
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.194741 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p98f5" event={"ID":"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6","Type":"ContainerStarted","Data":"0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98"}
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.196895 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerStarted","Data":"b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011"}
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.212350 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" podStartSLOduration=5.21232475 podStartE2EDuration="5.21232475s" podCreationTimestamp="2026-01-30 00:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:54.205755536 +0000 UTC m=+137.317291436" watchObservedRunningTime="2026-01-30 00:12:54.21232475 +0000 UTC m=+137.323860640"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.231332 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hpvcc" podStartSLOduration=6.884806119 podStartE2EDuration="22.231310683s" podCreationTimestamp="2026-01-30 00:12:32 +0000 UTC" firstStartedPulling="2026-01-30 00:12:35.08578992 +0000 UTC m=+118.197325820" lastFinishedPulling="2026-01-30 00:12:50.432294494 +0000 UTC m=+133.543830384" observedRunningTime="2026-01-30 00:12:54.22587438 +0000 UTC m=+137.337410290" watchObservedRunningTime="2026-01-30 00:12:54.231310683 +0000 UTC m=+137.342846573"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.255028 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p98f5" podStartSLOduration=5.778582966 podStartE2EDuration="19.255003678s" podCreationTimestamp="2026-01-30 00:12:35 +0000 UTC" firstStartedPulling="2026-01-30 00:12:36.958049823 +0000 UTC m=+120.069585713" lastFinishedPulling="2026-01-30 00:12:50.434470535 +0000 UTC m=+133.546006425" observedRunningTime="2026-01-30 00:12:54.252070706 +0000 UTC m=+137.363606606" watchObservedRunningTime="2026-01-30 00:12:54.255003678 +0000 UTC m=+137.366539568"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.282002 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nfcw7" podStartSLOduration=6.9295970669999996 podStartE2EDuration="22.281977845s" podCreationTimestamp="2026-01-30 00:12:32 +0000 UTC" firstStartedPulling="2026-01-30 00:12:35.08471256 +0000 UTC m=+118.196248450" lastFinishedPulling="2026-01-30 00:12:50.437093348 +0000 UTC m=+133.548629228" observedRunningTime="2026-01-30 00:12:54.278430936 +0000 UTC m=+137.389966846" watchObservedRunningTime="2026-01-30 00:12:54.281977845 +0000 UTC m=+137.393513735"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.286403 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.749864 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-x2hcj"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.749921 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x2hcj"
Jan 30 00:12:54 crc kubenswrapper[5117]: I0130 00:12:54.934321 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.074389 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b9hmp" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="registry-server" probeResult="failure" output=<
Jan 30 00:12:55 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s
Jan 30 00:12:55 crc kubenswrapper[5117]: >
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.222976 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l95kd" podStartSLOduration=7.625357669 podStartE2EDuration="20.222961848s" podCreationTimestamp="2026-01-30 00:12:35 +0000 UTC" firstStartedPulling="2026-01-30 00:12:37.974586462 +0000 UTC m=+121.086122352" lastFinishedPulling="2026-01-30 00:12:50.572190641 +0000 UTC m=+133.683726531" observedRunningTime="2026-01-30 00:12:55.222107214 +0000 UTC m=+138.333643104" watchObservedRunningTime="2026-01-30 00:12:55.222961848 +0000 UTC m=+138.334497738"
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.446042 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-95kbf"
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.446386 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-95kbf"
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.521381 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-95kbf"
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.806152 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-x2hcj" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="registry-server" probeResult="failure" output=<
Jan 30 00:12:55 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s
Jan 30 00:12:55 crc kubenswrapper[5117]: >
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.920068 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-p98f5"
Jan 30 00:12:55 crc kubenswrapper[5117]: I0130 00:12:55.920141 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p98f5"
Jan 30 00:12:56 crc kubenswrapper[5117]: I0130 00:12:56.331894 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-l95kd"
Jan 30 00:12:56 crc kubenswrapper[5117]: I0130 00:12:56.331943 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l95kd"
Jan 30 00:12:56 crc kubenswrapper[5117]: I0130 00:12:56.959266 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p98f5" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="registry-server" probeResult="failure" output=<
Jan 30 00:12:56 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s
Jan 30 00:12:56 crc kubenswrapper[5117]: >
Jan 30 00:12:57 crc kubenswrapper[5117]: I0130 00:12:57.258057 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-95kbf"
Jan 30 00:12:57 crc kubenswrapper[5117]: I0130 00:12:57.377281 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l95kd" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="registry-server" probeResult="failure" output=<
Jan 30 00:12:57 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s
Jan 30 00:12:57 crc kubenswrapper[5117]: >
Jan 30 00:13:00 crc kubenswrapper[5117]: I0130 00:13:00.021506 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:13:00 crc kubenswrapper[5117]: I0130 00:13:00.329069 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95kbf"]
Jan 30 00:13:00 crc kubenswrapper[5117]: I0130 00:13:00.329632 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-95kbf" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="registry-server" containerID="cri-o://0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121" gracePeriod=2
Jan 30 00:13:00 crc kubenswrapper[5117]: E0130 00:13:00.792658 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:13:00 crc kubenswrapper[5117]: E0130 00:13:00.794206 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:13:00 crc kubenswrapper[5117]: E0130 00:13:00.796302 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 30 00:13:00 crc kubenswrapper[5117]: E0130 00:13:00.796346 5117 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 30 00:13:01 crc kubenswrapper[5117]: I0130 00:13:01.807950 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-xkn89"
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.242293 5117 generic.go:358] "Generic (PLEG): container finished" podID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerID="0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121" exitCode=0
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.242328 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95kbf" event={"ID":"c8afa4c5-96fe-4cf5-b8cb-d61786386452","Type":"ContainerDied","Data":"0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121"}
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.884510 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-26tjl"
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.885874 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-26tjl"
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.895935 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfcw7"
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.896007 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nfcw7"
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.942350 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-26tjl"
Jan 30 00:13:02 crc kubenswrapper[5117]: I0130 00:13:02.944001 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfcw7"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.202019 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hpvcc"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.202085 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hpvcc"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.241057 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hpvcc"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.287944 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hpvcc"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.300632 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-26tjl"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.305166 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfcw7"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.463720 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b9hmp"
Jan 30 00:13:03 crc kubenswrapper[5117]: I0130 00:13:03.511169 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b9hmp"
Jan 30 00:13:04 crc kubenswrapper[5117]: I0130 00:13:04.530888 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9hmp"]
Jan 30 00:13:04 crc kubenswrapper[5117]: I0130 00:13:04.824202 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x2hcj"
Jan 30 00:13:04 crc kubenswrapper[5117]: I0130 00:13:04.892874 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x2hcj"
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.128853 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpvcc"]
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.267028 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95kbf" event={"ID":"c8afa4c5-96fe-4cf5-b8cb-d61786386452","Type":"ContainerDied","Data":"6446d0c8a70f78204e905c92c6ac633ea6829bb377dd20072e3158041e5c5e25"}
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.267109 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6446d0c8a70f78204e905c92c6ac633ea6829bb377dd20072e3158041e5c5e25"
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.273973 5117 generic.go:358] "Generic (PLEG): container finished" podID="7370f172-a96c-42c9-971b-76b5ef52303e" containerID="3f9a11ef5868a7b98f073d251e52c71f53625139b249dc37c4cf10406791dacc" exitCode=0
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.274532 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hpvcc" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="registry-server" containerID="cri-o://e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f" gracePeriod=2
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.275225 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-ngpdz" event={"ID":"7370f172-a96c-42c9-971b-76b5ef52303e","Type":"ContainerDied","Data":"3f9a11ef5868a7b98f073d251e52c71f53625139b249dc37c4cf10406791dacc"}
Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.275807 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b9hmp" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="registry-server" containerID="cri-o://24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a" gracePeriod=2
Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.380239 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c584ba7_3c7e_4eb3_ab6e_49155e956ab6.slice/crio-conmon-33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c584ba7_3c7e_4eb3_ab6e_49155e956ab6.slice/crio-conmon-33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f.scope: no such file or directory
Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.380884 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4202452_295a_4f89_bc23_cdbf6c271f02.slice/crio-conmon-956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4202452_295a_4f89_bc23_cdbf6c271f02.slice/crio-conmon-956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea.scope: no such file or directory
Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.380930 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c584ba7_3c7e_4eb3_ab6e_49155e956ab6.slice/crio-33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c584ba7_3c7e_4eb3_ab6e_49155e956ab6.slice/crio-33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f.scope: no such file or directory
Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.381615 5117 watcher.go:93] Error while processing event
("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4202452_295a_4f89_bc23_cdbf6c271f02.slice/crio-956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4202452_295a_4f89_bc23_cdbf6c271f02.slice/crio-956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.390999 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-conmon-24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-conmon-24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.391064 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.391099 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa4c5_96fe_4cf5_b8cb_d61786386452.slice/crio-conmon-0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa4c5_96fe_4cf5_b8cb_d61786386452.slice/crio-conmon-0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.391126 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa4c5_96fe_4cf5_b8cb_d61786386452.slice/crio-0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa4c5_96fe_4cf5_b8cb_d61786386452.slice/crio-0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.412832 5117 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-conmon-e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-conmon-e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: W0130 00:13:05.412943 5117 watcher.go:93] Error while 
processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f.scope: no such file or directory Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.533029 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.538243 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dtfxb_49962195-77dc-47ef-a7dc-e9c1631d049d/kube-multus-additional-cni-plugins/0.log" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.538326 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:13:05 crc kubenswrapper[5117]: E0130 00:13:05.542269 5117 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe73bcd6_db8f_4472_a65f_b7858304bc8b.slice/crio-eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48b76cf6_e8bb_4fb2_92bd_4b1718a794f6.slice/crio-e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1cd991b_8078_45cb_9591_ae3f5a4d4db4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48b76cf6_e8bb_4fb2_92bd_4b1718a794f6.slice/crio-conmon-e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb191c78_b1b1_4b69_b609_210416eb3356.slice/crio-e76baec32ee4694a878130ac5c59a178acbac85a0f06a4e1b6ca8abed52ecc60\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49962195_77dc_47ef_a7dc_e9c1631d049d.slice/crio-conmon-19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e9f7c6_ffd2_40f7_82fa_9fab50710838.slice/crio-conmon-845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b849f_183c_4227_9dc6_a7dc0d8a6a81.slice/crio-conmon-2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe73bcd6_db8f_4472_a65f_b7858304bc8b.slice/crio-conmon-eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb191c78_b1b1_4b69_b609_210416eb3356.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa4c5_96fe_4cf5_b8cb_d61786386452.slice/crio-conmon-d6b80db46aee6e6c0d623048b017742bf66cb7a0562173d5ec24bf01a9bd0c0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7370f172_a96c_42c9_971b_76b5ef52303e.slice/crio-conmon-3f9a11ef5868a7b98f073d251e52c71f53625139b249dc37c4cf10406791dacc.scope\": RecentStats: unable to find data in memory cache]" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646250 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49962195-77dc-47ef-a7dc-e9c1631d049d-cni-sysctl-allowlist\") pod \"49962195-77dc-47ef-a7dc-e9c1631d049d\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646314 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcfbf\" (UniqueName: \"kubernetes.io/projected/c8afa4c5-96fe-4cf5-b8cb-d61786386452-kube-api-access-mcfbf\") pod \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646387 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2t98\" (UniqueName: \"kubernetes.io/projected/49962195-77dc-47ef-a7dc-e9c1631d049d-kube-api-access-m2t98\") pod \"49962195-77dc-47ef-a7dc-e9c1631d049d\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646466 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-catalog-content\") pod \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646491 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/49962195-77dc-47ef-a7dc-e9c1631d049d-ready\") pod \"49962195-77dc-47ef-a7dc-e9c1631d049d\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646514 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-utilities\") pod \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\" (UID: \"c8afa4c5-96fe-4cf5-b8cb-d61786386452\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.646551 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49962195-77dc-47ef-a7dc-e9c1631d049d-tuning-conf-dir\") pod 
\"49962195-77dc-47ef-a7dc-e9c1631d049d\" (UID: \"49962195-77dc-47ef-a7dc-e9c1631d049d\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.647100 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49962195-77dc-47ef-a7dc-e9c1631d049d-ready" (OuterVolumeSpecName: "ready") pod "49962195-77dc-47ef-a7dc-e9c1631d049d" (UID: "49962195-77dc-47ef-a7dc-e9c1631d049d"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.647498 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49962195-77dc-47ef-a7dc-e9c1631d049d-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "49962195-77dc-47ef-a7dc-e9c1631d049d" (UID: "49962195-77dc-47ef-a7dc-e9c1631d049d"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.647511 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49962195-77dc-47ef-a7dc-e9c1631d049d-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "49962195-77dc-47ef-a7dc-e9c1631d049d" (UID: "49962195-77dc-47ef-a7dc-e9c1631d049d"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.648764 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-utilities" (OuterVolumeSpecName: "utilities") pod "c8afa4c5-96fe-4cf5-b8cb-d61786386452" (UID: "c8afa4c5-96fe-4cf5-b8cb-d61786386452"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.662585 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49962195-77dc-47ef-a7dc-e9c1631d049d-kube-api-access-m2t98" (OuterVolumeSpecName: "kube-api-access-m2t98") pod "49962195-77dc-47ef-a7dc-e9c1631d049d" (UID: "49962195-77dc-47ef-a7dc-e9c1631d049d"). InnerVolumeSpecName "kube-api-access-m2t98". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.665142 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8afa4c5-96fe-4cf5-b8cb-d61786386452" (UID: "c8afa4c5-96fe-4cf5-b8cb-d61786386452"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.668258 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8afa4c5-96fe-4cf5-b8cb-d61786386452-kube-api-access-mcfbf" (OuterVolumeSpecName: "kube-api-access-mcfbf") pod "c8afa4c5-96fe-4cf5-b8cb-d61786386452" (UID: "c8afa4c5-96fe-4cf5-b8cb-d61786386452"). InnerVolumeSpecName "kube-api-access-mcfbf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747745 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747783 5117 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/49962195-77dc-47ef-a7dc-e9c1631d049d-ready\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747799 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8afa4c5-96fe-4cf5-b8cb-d61786386452-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747807 5117 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49962195-77dc-47ef-a7dc-e9c1631d049d-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747816 5117 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49962195-77dc-47ef-a7dc-e9c1631d049d-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747826 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcfbf\" (UniqueName: \"kubernetes.io/projected/c8afa4c5-96fe-4cf5-b8cb-d61786386452-kube-api-access-mcfbf\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.747835 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m2t98\" (UniqueName: \"kubernetes.io/projected/49962195-77dc-47ef-a7dc-e9c1631d049d-kube-api-access-m2t98\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.780540 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.838416 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.850301 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4j5p\" (UniqueName: \"kubernetes.io/projected/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-kube-api-access-n4j5p\") pod \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.850537 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-utilities\") pod \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.850609 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-catalog-content\") pod \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\" (UID: \"d8e9f7c6-ffd2-40f7-82fa-9fab50710838\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.852303 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-utilities" (OuterVolumeSpecName: "utilities") pod "d8e9f7c6-ffd2-40f7-82fa-9fab50710838" (UID: "d8e9f7c6-ffd2-40f7-82fa-9fab50710838"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.856302 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-kube-api-access-n4j5p" (OuterVolumeSpecName: "kube-api-access-n4j5p") pod "d8e9f7c6-ffd2-40f7-82fa-9fab50710838" (UID: "d8e9f7c6-ffd2-40f7-82fa-9fab50710838"). InnerVolumeSpecName "kube-api-access-n4j5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.910012 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8e9f7c6-ffd2-40f7-82fa-9fab50710838" (UID: "d8e9f7c6-ffd2-40f7-82fa-9fab50710838"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.952229 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-catalog-content\") pod \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.952281 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-utilities\") pod \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.952398 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhst2\" (UniqueName: \"kubernetes.io/projected/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-kube-api-access-nhst2\") pod \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\" (UID: \"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81\") " Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.952617 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.952633 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.952646 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4j5p\" (UniqueName: \"kubernetes.io/projected/d8e9f7c6-ffd2-40f7-82fa-9fab50710838-kube-api-access-n4j5p\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.953306 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-utilities" (OuterVolumeSpecName: "utilities") pod "cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" (UID: "cf2b849f-183c-4227-9dc6-a7dc0d8a6a81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.956166 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-kube-api-access-nhst2" (OuterVolumeSpecName: "kube-api-access-nhst2") pod "cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" (UID: "cf2b849f-183c-4227-9dc6-a7dc0d8a6a81"). InnerVolumeSpecName "kube-api-access-nhst2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.970780 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:13:05 crc kubenswrapper[5117]: I0130 00:13:05.983010 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" (UID: "cf2b849f-183c-4227-9dc6-a7dc0d8a6a81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.014764 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.053467 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.053501 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.053511 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhst2\" (UniqueName: \"kubernetes.io/projected/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81-kube-api-access-nhst2\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.284637 5117 generic.go:358] "Generic (PLEG): container finished" podID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerID="e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f" exitCode=0 Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.284750 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerDied","Data":"e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f"} Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.284773 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpvcc" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.284906 5117 scope.go:117] "RemoveContainer" containerID="e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.284886 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpvcc" event={"ID":"cf2b849f-183c-4227-9dc6-a7dc0d8a6a81","Type":"ContainerDied","Data":"2b44cc134c1be33631a8d3969d85f043d85f6636f80f987c832026840f7862ce"} Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.292508 5117 generic.go:358] "Generic (PLEG): container finished" podID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerID="24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a" exitCode=0 Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.292649 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9hmp" event={"ID":"d8e9f7c6-ffd2-40f7-82fa-9fab50710838","Type":"ContainerDied","Data":"24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a"} Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.292703 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9hmp" event={"ID":"d8e9f7c6-ffd2-40f7-82fa-9fab50710838","Type":"ContainerDied","Data":"2a38f28bc991fb110b4f8df51cdb83df91c9aab40bc1c50e5474ce6fa2cee8ff"} Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.292837 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9hmp" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.300129 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dtfxb_49962195-77dc-47ef-a7dc-e9c1631d049d/kube-multus-additional-cni-plugins/0.log" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.300166 5117 generic.go:358] "Generic (PLEG): container finished" podID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769" exitCode=137 Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.300344 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95kbf" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.300678 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" event={"ID":"49962195-77dc-47ef-a7dc-e9c1631d049d","Type":"ContainerDied","Data":"19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769"} Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.300730 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" event={"ID":"49962195-77dc-47ef-a7dc-e9c1631d049d","Type":"ContainerDied","Data":"6c7bd7ba3d58d217aa8759156340e741dbdb9a1588ebd175833a393b2ff3c57e"} Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.300800 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dtfxb" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.329748 5117 scope.go:117] "RemoveContainer" containerID="2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.350644 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpvcc"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.358240 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hpvcc"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.372252 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9hmp"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.376750 5117 scope.go:117] "RemoveContainer" containerID="d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.378471 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b9hmp"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.386336 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95kbf"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.386902 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.391830 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-95kbf"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.397220 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dtfxb"] Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.397251 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dtfxb"] 
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.400214 5117 scope.go:117] "RemoveContainer" containerID="e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.400592 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f\": container with ID starting with e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f not found: ID does not exist" containerID="e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.400628 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f"} err="failed to get container status \"e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f\": rpc error: code = NotFound desc = could not find container \"e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f\": container with ID starting with e067d63ed046d348f913533125a6a6720dc610b378cf622f15d3b25897a0203f not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.400670 5117 scope.go:117] "RemoveContainer" containerID="2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.402001 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069\": container with ID starting with 2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069 not found: ID does not exist" containerID="2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.402026 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069"} err="failed to get container status \"2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069\": rpc error: code = NotFound desc = could not find container \"2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069\": container with ID starting with 2cade0907903a0e4bc27c407b5981b0263461951d05f781c17ff0e18a7a37069 not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.402043 5117 scope.go:117] "RemoveContainer" containerID="d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.402380 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7\": container with ID starting with d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7 not found: ID does not exist" containerID="d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.402408 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7"} err="failed to get container status \"d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7\": rpc error: code = NotFound desc = could not find container \"d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7\": container with ID starting with d1b292ca6370c3ec0dd044e7ac47feafa21f3ddf5131829f31bdc3533142bab7 not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.402423 5117 scope.go:117] "RemoveContainer" containerID="24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.429488 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l95kd"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.440416 5117 scope.go:117] "RemoveContainer" containerID="845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.480171 5117 scope.go:117] "RemoveContainer" containerID="2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.506995 5117 scope.go:117] "RemoveContainer" containerID="24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.507518 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a\": container with ID starting with 24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a not found: ID does not exist" containerID="24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.507598 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a"} err="failed to get container status \"24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a\": rpc error: code = NotFound desc = could not find container \"24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a\": container with ID starting with 24afbcfa8face3ee789237b9f755e2f488504e2652a38f859ccb0fa05510765a not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.507635 5117 scope.go:117] "RemoveContainer" containerID="845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.508048 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d\": container with ID starting with 845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d not found: ID does not exist" containerID="845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.508095 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d"} err="failed to get container status \"845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d\": rpc error: code = NotFound desc = could not find container \"845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d\": container with ID starting with 845e630d86c8a3967e5cd4bf6b004d17d42a4ae08ecd6b6f41fa228a6568a25d not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.508127 5117 scope.go:117] "RemoveContainer" containerID="2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.508502 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2\": container with ID starting with 2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2 not found: ID does not exist" containerID="2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.508576 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2"} err="failed to get container status \"2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2\": rpc error: code = NotFound desc = could not find container \"2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2\": container with ID starting with 2a800e0d7edc115d143737e6d620c7ce4bc6a96c01fb46e4cd20d3bba373dca2 not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.508630 5117 scope.go:117] "RemoveContainer" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.528138 5117 scope.go:117] "RemoveContainer" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769"
Jan 30 00:13:06 crc kubenswrapper[5117]: E0130 00:13:06.529304 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769\": container with ID starting with 19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769 not found: ID does not exist" containerID="19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.529335 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769"} err="failed to get container status \"19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769\": rpc error: code = NotFound desc = could not find container \"19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769\": container with ID starting with 19948867a8a881d646d0db4d0bb3768b34266573c31be8bf0a55d8948f13d769 not found: ID does not exist"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.602362 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.662495 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgkds\" (UniqueName: \"kubernetes.io/projected/7370f172-a96c-42c9-971b-76b5ef52303e-kube-api-access-fgkds\") pod \"7370f172-a96c-42c9-971b-76b5ef52303e\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") "
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.662579 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7370f172-a96c-42c9-971b-76b5ef52303e-serviceca\") pod \"7370f172-a96c-42c9-971b-76b5ef52303e\" (UID: \"7370f172-a96c-42c9-971b-76b5ef52303e\") "
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.664091 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7370f172-a96c-42c9-971b-76b5ef52303e-serviceca" (OuterVolumeSpecName: "serviceca") pod "7370f172-a96c-42c9-971b-76b5ef52303e" (UID: "7370f172-a96c-42c9-971b-76b5ef52303e"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.668951 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7370f172-a96c-42c9-971b-76b5ef52303e-kube-api-access-fgkds" (OuterVolumeSpecName: "kube-api-access-fgkds") pod "7370f172-a96c-42c9-971b-76b5ef52303e" (UID: "7370f172-a96c-42c9-971b-76b5ef52303e"). InnerVolumeSpecName "kube-api-access-fgkds". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.765036 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fgkds\" (UniqueName: \"kubernetes.io/projected/7370f172-a96c-42c9-971b-76b5ef52303e-kube-api-access-fgkds\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:06 crc kubenswrapper[5117]: I0130 00:13:06.765319 5117 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7370f172-a96c-42c9-971b-76b5ef52303e-serviceca\") on node \"crc\" DevicePath \"\""
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.038331 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c"
Jan 30 00:13:07 crc kubenswrapper[5117]: E0130 00:13:07.038773 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.050103 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" path="/var/lib/kubelet/pods/49962195-77dc-47ef-a7dc-e9c1631d049d/volumes"
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.051212 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" path="/var/lib/kubelet/pods/c8afa4c5-96fe-4cf5-b8cb-d61786386452/volumes"
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.051962 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" path="/var/lib/kubelet/pods/cf2b849f-183c-4227-9dc6-a7dc0d8a6a81/volumes"
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.053152 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" path="/var/lib/kubelet/pods/d8e9f7c6-ffd2-40f7-82fa-9fab50710838/volumes"
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.312568 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-ngpdz"
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.312596 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-ngpdz" event={"ID":"7370f172-a96c-42c9-971b-76b5ef52303e","Type":"ContainerDied","Data":"6a3a5eeb368f8c8a938467f80169671dfb5efef26f8dc52d707569e3df677f75"}
Jan 30 00:13:07 crc kubenswrapper[5117]: I0130 00:13:07.312629 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3a5eeb368f8c8a938467f80169671dfb5efef26f8dc52d707569e3df677f75"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.266112 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7888c87fc5-727nr"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.266844 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" podUID="7ca6f152-160f-4292-9590-0950b8efbc34" containerName="controller-manager" containerID="cri-o://c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0" gracePeriod=30
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.280104 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.280386 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" podUID="42fb34a2-9d65-4ba9-aae4-9697cb736b01" containerName="route-controller-manager" containerID="cri-o://a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d" gracePeriod=30
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.527100 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l95kd"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.527820 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l95kd" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="registry-server" containerID="cri-o://b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011" gracePeriod=2
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.800572 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.841596 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844402 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844426 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844435 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844442 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844451 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="extract-utilities"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844457 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="extract-utilities"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844466 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844472 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844481 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="extract-content"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844488 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="extract-content"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844526 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="extract-content"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844533 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="extract-content"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844549 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="extract-content"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844576 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="extract-content"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844584 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844589 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844599 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="extract-utilities"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844605 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="extract-utilities"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844615 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="extract-utilities"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844682 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="extract-utilities"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844724 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42fb34a2-9d65-4ba9-aae4-9697cb736b01" containerName="route-controller-manager"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844730 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fb34a2-9d65-4ba9-aae4-9697cb736b01" containerName="route-controller-manager"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844740 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7370f172-a96c-42c9-971b-76b5ef52303e" containerName="image-pruner"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844746 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7370f172-a96c-42c9-971b-76b5ef52303e" containerName="image-pruner"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844982 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf2b849f-183c-4227-9dc6-a7dc0d8a6a81" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.844997 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="42fb34a2-9d65-4ba9-aae4-9697cb736b01" containerName="route-controller-manager"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.845005 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="49962195-77dc-47ef-a7dc-e9c1631d049d" containerName="kube-multus-additional-cni-plugins"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.845012 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d8e9f7c6-ffd2-40f7-82fa-9fab50710838" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.845068 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7370f172-a96c-42c9-971b-76b5ef52303e" containerName="image-pruner"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.845077 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="c8afa4c5-96fe-4cf5-b8cb-d61786386452" containerName="registry-server"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.851216 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.851383 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.852771 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fb34a2-9d65-4ba9-aae4-9697cb736b01-serving-cert\") pod \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") "
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.852842 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-config\") pod \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") "
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.852874 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-client-ca\") pod \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") "
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.852911 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qktmd\" (UniqueName: \"kubernetes.io/projected/42fb34a2-9d65-4ba9-aae4-9697cb736b01-kube-api-access-qktmd\") pod \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") "
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.852935 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fb34a2-9d65-4ba9-aae4-9697cb736b01-tmp\") pod \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\" (UID: \"42fb34a2-9d65-4ba9-aae4-9697cb736b01\") "
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.853942 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42fb34a2-9d65-4ba9-aae4-9697cb736b01-tmp" (OuterVolumeSpecName: "tmp") pod "42fb34a2-9d65-4ba9-aae4-9697cb736b01" (UID: "42fb34a2-9d65-4ba9-aae4-9697cb736b01"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.854346 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-config" (OuterVolumeSpecName: "config") pod "42fb34a2-9d65-4ba9-aae4-9697cb736b01" (UID: "42fb34a2-9d65-4ba9-aae4-9697cb736b01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.854409 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-client-ca" (OuterVolumeSpecName: "client-ca") pod "42fb34a2-9d65-4ba9-aae4-9697cb736b01" (UID: "42fb34a2-9d65-4ba9-aae4-9697cb736b01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.860593 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fb34a2-9d65-4ba9-aae4-9697cb736b01-kube-api-access-qktmd" (OuterVolumeSpecName: "kube-api-access-qktmd") pod "42fb34a2-9d65-4ba9-aae4-9697cb736b01" (UID: "42fb34a2-9d65-4ba9-aae4-9697cb736b01"). InnerVolumeSpecName "kube-api-access-qktmd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.860613 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fb34a2-9d65-4ba9-aae4-9697cb736b01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42fb34a2-9d65-4ba9-aae4-9697cb736b01" (UID: "42fb34a2-9d65-4ba9-aae4-9697cb736b01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.891760 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.903763 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.903972 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.906586 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.906944 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.955543 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4933f303-e64d-4d37-8c67-b832aefc8def-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.955606 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4933f303-e64d-4d37-8c67-b832aefc8def-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.955637 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333dce6d-4088-4b4a-9256-cd5f0e508e54-serving-cert\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.955678 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhzlm\" (UniqueName: \"kubernetes.io/projected/333dce6d-4088-4b4a-9256-cd5f0e508e54-kube-api-access-nhzlm\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"
Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.955728 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-config\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") "
pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.955935 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/333dce6d-4088-4b4a-9256-cd5f0e508e54-tmp\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.956058 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-client-ca\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.956235 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fb34a2-9d65-4ba9-aae4-9697cb736b01-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.956251 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.956261 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42fb34a2-9d65-4ba9-aae4-9697cb736b01-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.956271 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qktmd\" (UniqueName: \"kubernetes.io/projected/42fb34a2-9d65-4ba9-aae4-9697cb736b01-kube-api-access-qktmd\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.956284 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42fb34a2-9d65-4ba9-aae4-9697cb736b01-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:09 crc kubenswrapper[5117]: I0130 00:13:09.982521 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.004005 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.042799 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-744479dc7b-8pqxp"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043493 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="extract-content" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043520 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="extract-content" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043536 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7ca6f152-160f-4292-9590-0950b8efbc34" containerName="controller-manager" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043852 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca6f152-160f-4292-9590-0950b8efbc34" containerName="controller-manager" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043872 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="extract-utilities" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043879 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="extract-utilities" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043911 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="registry-server" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.043921 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="registry-server" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.044052 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerName="registry-server" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.044067 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7ca6f152-160f-4292-9590-0950b8efbc34" containerName="controller-manager" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.047995 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.048275 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-744479dc7b-8pqxp"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.056975 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca6f152-160f-4292-9590-0950b8efbc34-serving-cert\") pod \"7ca6f152-160f-4292-9590-0950b8efbc34\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057028 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-utilities\") pod \"d4202452-295a-4f89-bc23-cdbf6c271f02\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057158 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-client-ca\") pod \"7ca6f152-160f-4292-9590-0950b8efbc34\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057179 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4cpv\" (UniqueName: \"kubernetes.io/projected/d4202452-295a-4f89-bc23-cdbf6c271f02-kube-api-access-f4cpv\") pod \"d4202452-295a-4f89-bc23-cdbf6c271f02\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057202 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7ca6f152-160f-4292-9590-0950b8efbc34-tmp\") pod \"7ca6f152-160f-4292-9590-0950b8efbc34\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057222 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l22ws\" (UniqueName: \"kubernetes.io/projected/7ca6f152-160f-4292-9590-0950b8efbc34-kube-api-access-l22ws\") pod \"7ca6f152-160f-4292-9590-0950b8efbc34\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057259 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-catalog-content\") pod \"d4202452-295a-4f89-bc23-cdbf6c271f02\" (UID: \"d4202452-295a-4f89-bc23-cdbf6c271f02\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057281 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-proxy-ca-bundles\") pod \"7ca6f152-160f-4292-9590-0950b8efbc34\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057295 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-config\") pod \"7ca6f152-160f-4292-9590-0950b8efbc34\" (UID: \"7ca6f152-160f-4292-9590-0950b8efbc34\") " Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057390 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e319d20e-456e-492b-bd04-a3be934a737c-serving-cert\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057431 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4933f303-e64d-4d37-8c67-b832aefc8def-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057448 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4933f303-e64d-4d37-8c67-b832aefc8def-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057463 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333dce6d-4088-4b4a-9256-cd5f0e508e54-serving-cert\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057491 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e319d20e-456e-492b-bd04-a3be934a737c-tmp\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057509 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhzlm\" (UniqueName: \"kubernetes.io/projected/333dce6d-4088-4b4a-9256-cd5f0e508e54-kube-api-access-nhzlm\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057529 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-config\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057562 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/333dce6d-4088-4b4a-9256-cd5f0e508e54-tmp\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057585 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7lw\" (UniqueName: 
\"kubernetes.io/projected/e319d20e-456e-492b-bd04-a3be934a737c-kube-api-access-gb7lw\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057608 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-client-ca\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057630 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-client-ca\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057708 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-config\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.057736 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-proxy-ca-bundles\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.060627 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-client-ca\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.060833 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7ca6f152-160f-4292-9590-0950b8efbc34" (UID: "7ca6f152-160f-4292-9590-0950b8efbc34"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.061251 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/333dce6d-4088-4b4a-9256-cd5f0e508e54-tmp\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.061306 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-config" (OuterVolumeSpecName: "config") pod "7ca6f152-160f-4292-9590-0950b8efbc34" (UID: "7ca6f152-160f-4292-9590-0950b8efbc34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.061377 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4933f303-e64d-4d37-8c67-b832aefc8def-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.061675 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-client-ca" (OuterVolumeSpecName: "client-ca") pod "7ca6f152-160f-4292-9590-0950b8efbc34" (UID: "7ca6f152-160f-4292-9590-0950b8efbc34"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.062113 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-config\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.062151 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca6f152-160f-4292-9590-0950b8efbc34-tmp" (OuterVolumeSpecName: "tmp") pod "7ca6f152-160f-4292-9590-0950b8efbc34" (UID: "7ca6f152-160f-4292-9590-0950b8efbc34"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.070923 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca6f152-160f-4292-9590-0950b8efbc34-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7ca6f152-160f-4292-9590-0950b8efbc34" (UID: "7ca6f152-160f-4292-9590-0950b8efbc34"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.071080 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca6f152-160f-4292-9590-0950b8efbc34-kube-api-access-l22ws" (OuterVolumeSpecName: "kube-api-access-l22ws") pod "7ca6f152-160f-4292-9590-0950b8efbc34" (UID: "7ca6f152-160f-4292-9590-0950b8efbc34"). InnerVolumeSpecName "kube-api-access-l22ws". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.071190 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-utilities" (OuterVolumeSpecName: "utilities") pod "d4202452-295a-4f89-bc23-cdbf6c271f02" (UID: "d4202452-295a-4f89-bc23-cdbf6c271f02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.080534 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4202452-295a-4f89-bc23-cdbf6c271f02-kube-api-access-f4cpv" (OuterVolumeSpecName: "kube-api-access-f4cpv") pod "d4202452-295a-4f89-bc23-cdbf6c271f02" (UID: "d4202452-295a-4f89-bc23-cdbf6c271f02"). InnerVolumeSpecName "kube-api-access-f4cpv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.086285 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4933f303-e64d-4d37-8c67-b832aefc8def-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.088711 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhzlm\" (UniqueName: \"kubernetes.io/projected/333dce6d-4088-4b4a-9256-cd5f0e508e54-kube-api-access-nhzlm\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.110663 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333dce6d-4088-4b4a-9256-cd5f0e508e54-serving-cert\") pod \"route-controller-manager-7756c44959-jl84t\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158352 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-client-ca\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158415 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-config\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158440 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-proxy-ca-bundles\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158464 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e319d20e-456e-492b-bd04-a3be934a737c-serving-cert\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158496 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e319d20e-456e-492b-bd04-a3be934a737c-tmp\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158535 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7lw\" (UniqueName: \"kubernetes.io/projected/e319d20e-456e-492b-bd04-a3be934a737c-kube-api-access-gb7lw\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158578 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158588 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158597 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4cpv\" (UniqueName: \"kubernetes.io/projected/d4202452-295a-4f89-bc23-cdbf6c271f02-kube-api-access-f4cpv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.158723 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7ca6f152-160f-4292-9590-0950b8efbc34-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.159266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e319d20e-456e-492b-bd04-a3be934a737c-tmp\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.159592 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l22ws\" (UniqueName: \"kubernetes.io/projected/7ca6f152-160f-4292-9590-0950b8efbc34-kube-api-access-l22ws\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.159612 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.159621 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca6f152-160f-4292-9590-0950b8efbc34-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.159630 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7ca6f152-160f-4292-9590-0950b8efbc34-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.159788 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-proxy-ca-bundles\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.160386 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-client-ca\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.160613 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-config\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.164779 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e319d20e-456e-492b-bd04-a3be934a737c-serving-cert\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.174078 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4202452-295a-4f89-bc23-cdbf6c271f02" (UID: "d4202452-295a-4f89-bc23-cdbf6c271f02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.181999 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb7lw\" (UniqueName: \"kubernetes.io/projected/e319d20e-456e-492b-bd04-a3be934a737c-kube-api-access-gb7lw\") pod \"controller-manager-744479dc7b-8pqxp\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.189465 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.239323 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.263242 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4202452-295a-4f89-bc23-cdbf6c271f02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.349813 5117 generic.go:358] "Generic (PLEG): container finished" podID="d4202452-295a-4f89-bc23-cdbf6c271f02" containerID="b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.349868 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerDied","Data":"b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011"} Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.349918 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l95kd" event={"ID":"d4202452-295a-4f89-bc23-cdbf6c271f02","Type":"ContainerDied","Data":"d4dd41a8f1b12f4e23480bf4f5aa9b6352d61f9a2d431d27b98ce0bcf6595a6c"} Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.349937 5117 scope.go:117] "RemoveContainer" containerID="b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.349952 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l95kd" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.361194 5117 generic.go:358] "Generic (PLEG): container finished" podID="7ca6f152-160f-4292-9590-0950b8efbc34" containerID="c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.361297 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" event={"ID":"7ca6f152-160f-4292-9590-0950b8efbc34","Type":"ContainerDied","Data":"c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0"} Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.361883 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" event={"ID":"7ca6f152-160f-4292-9590-0950b8efbc34","Type":"ContainerDied","Data":"b75f570640ff71b1357c119ec5ee078c417015bdf9f36754bf1efb445472f9f9"} Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.361307 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7888c87fc5-727nr" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.365377 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.366775 5117 generic.go:358] "Generic (PLEG): container finished" podID="42fb34a2-9d65-4ba9-aae4-9697cb736b01" containerID="a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.366916 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" event={"ID":"42fb34a2-9d65-4ba9-aae4-9697cb736b01","Type":"ContainerDied","Data":"a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d"} Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.367030 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" event={"ID":"42fb34a2-9d65-4ba9-aae4-9697cb736b01","Type":"ContainerDied","Data":"30f61c78c94d52dbf6e34a0d9803041c0ebfc94aa4923c0d89cad6abd2b3ea58"} Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.367859 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.394334 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l95kd"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.397076 5117 scope.go:117] "RemoveContainer" containerID="956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.409730 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l95kd"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.418761 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.439841 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599dc665bd-t9mpz"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.443725 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7888c87fc5-727nr"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.447137 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7888c87fc5-727nr"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.462743 5117 scope.go:117] "RemoveContainer" containerID="b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.501999 5117 scope.go:117] "RemoveContainer" containerID="b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011" Jan 30 00:13:10 crc kubenswrapper[5117]: E0130 00:13:10.502429 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011\": container with ID starting with b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011 not found: ID does not exist" containerID="b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.502462 5117 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011"} err="failed to get container status \"b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011\": rpc error: code = NotFound desc = could not find container \"b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011\": container with ID starting with b7ef7c79bca21431d70aaa7b082f257c34e3e2070aea0e27ea1a0567affc4011 not found: ID does not exist" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.502483 5117 scope.go:117] "RemoveContainer" containerID="956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea" Jan 30 00:13:10 crc kubenswrapper[5117]: E0130 00:13:10.502685 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea\": container with ID starting with 956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea not found: ID does not exist" containerID="956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.502757 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea"} err="failed to get container status \"956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea\": rpc error: code = NotFound desc = could not find container \"956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea\": container with ID starting with 956d548612d97f2b0df364643eccb46012babb48317f5e276555fc27f4f53bea not found: ID does not exist" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.502769 5117 scope.go:117] "RemoveContainer" containerID="b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354" Jan 30 00:13:10 crc kubenswrapper[5117]: E0130 00:13:10.502940 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354\": container with ID starting with b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354 not found: ID does not exist" containerID="b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.502960 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354"} err="failed to get container status \"b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354\": rpc error: code = NotFound desc = could not find container \"b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354\": container with ID starting with b3ccd4c172f60ec4c023a67f56cf4fc7f3ecb6022769cfbcbefd380f41f12354 not found: ID does not exist" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.502972 5117 scope.go:117] "RemoveContainer" containerID="c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.540717 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.544474 5117 scope.go:117] "RemoveContainer" containerID="c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0" Jan 30 00:13:10 crc kubenswrapper[5117]: E0130 00:13:10.546968 5117 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0\": container with ID starting with c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0 not found: ID does not exist" containerID="c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.547019 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0"} err="failed to get container status \"c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0\": rpc error: code = NotFound desc = could not find container \"c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0\": container with ID starting with c4ff52e6534db830f9198395d1402fa0cf4d3e1fe87cc11f7ffc5125a8d3e8d0 not found: ID does not exist" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.547055 5117 scope.go:117] "RemoveContainer" containerID="a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.578347 5117 scope.go:117] "RemoveContainer" containerID="a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d" Jan 30 00:13:10 crc kubenswrapper[5117]: E0130 00:13:10.579900 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d\": container with ID starting with a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d not found: ID does not exist" containerID="a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.579953 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d"} err="failed to get container status \"a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d\": rpc error: code = NotFound desc = could not find container \"a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d\": container with ID starting with a3424e2797fec6068c8a56adae945f74a0bd2ef2fd5e58b2a419d9ce5299732d not found: ID does not exist" Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.621582 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-744479dc7b-8pqxp"] Jan 30 00:13:10 crc kubenswrapper[5117]: W0130 00:13:10.631102 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode319d20e_456e_492b_bd04_a3be934a737c.slice/crio-9587a65cb251bdf86e82a220c5dcd01c3b15b2ca39ea7249e01e14391ab2edf6 WatchSource:0}: Error finding container 9587a65cb251bdf86e82a220c5dcd01c3b15b2ca39ea7249e01e14391ab2edf6: Status 404 returned error can't find the container with id 9587a65cb251bdf86e82a220c5dcd01c3b15b2ca39ea7249e01e14391ab2edf6 Jan 30 00:13:10 crc kubenswrapper[5117]: I0130 00:13:10.639151 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"] Jan 30 00:13:10 crc kubenswrapper[5117]: W0130 00:13:10.648569 5117 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod333dce6d_4088_4b4a_9256_cd5f0e508e54.slice/crio-51801da429b76e058d4d9f5db371470c422153c61a5799defb0f5925324eb8a8 WatchSource:0}: Error finding container 51801da429b76e058d4d9f5db371470c422153c61a5799defb0f5925324eb8a8: Status 404 returned error can't find the container with id 51801da429b76e058d4d9f5db371470c422153c61a5799defb0f5925324eb8a8 Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.046075 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fb34a2-9d65-4ba9-aae4-9697cb736b01" path="/var/lib/kubelet/pods/42fb34a2-9d65-4ba9-aae4-9697cb736b01/volumes" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.047735 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca6f152-160f-4292-9590-0950b8efbc34" path="/var/lib/kubelet/pods/7ca6f152-160f-4292-9590-0950b8efbc34/volumes" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.048528 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4202452-295a-4f89-bc23-cdbf6c271f02" path="/var/lib/kubelet/pods/d4202452-295a-4f89-bc23-cdbf6c271f02/volumes" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.377188 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" event={"ID":"333dce6d-4088-4b4a-9256-cd5f0e508e54","Type":"ContainerStarted","Data":"f89ce982a95bb7241a387a61cf33de4cdd824addf986dcf54a72d48fe8e88308"} Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.377226 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" event={"ID":"333dce6d-4088-4b4a-9256-cd5f0e508e54","Type":"ContainerStarted","Data":"51801da429b76e058d4d9f5db371470c422153c61a5799defb0f5925324eb8a8"} Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.377567 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.379603 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" event={"ID":"e319d20e-456e-492b-bd04-a3be934a737c","Type":"ContainerStarted","Data":"d767d75a5b0ac47883d51e8d603d78dc7b3f1f390320a8fe0b674e7ac07db63d"} Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.379631 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" event={"ID":"e319d20e-456e-492b-bd04-a3be934a737c","Type":"ContainerStarted","Data":"9587a65cb251bdf86e82a220c5dcd01c3b15b2ca39ea7249e01e14391ab2edf6"} Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.380068 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.382122 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4933f303-e64d-4d37-8c67-b832aefc8def","Type":"ContainerStarted","Data":"b5414f335dd4dfe4ce193fedf843ac96f2460c7e4740470cec056aa1bd381e5d"} Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.382168 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" 
event={"ID":"4933f303-e64d-4d37-8c67-b832aefc8def","Type":"ContainerStarted","Data":"a04eba8046bb87b66e664ccac5c01d00161efc8d8d6ef94448d14f0593036d38"} Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.386329 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.417462 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" podStartSLOduration=2.417445871 podStartE2EDuration="2.417445871s" podCreationTimestamp="2026-01-30 00:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:11.400954028 +0000 UTC m=+154.512489928" watchObservedRunningTime="2026-01-30 00:13:11.417445871 +0000 UTC m=+154.528981761" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.418266 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.4182607640000002 podStartE2EDuration="2.418260764s" podCreationTimestamp="2026-01-30 00:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:11.416755582 +0000 UTC m=+154.528291492" watchObservedRunningTime="2026-01-30 00:13:11.418260764 +0000 UTC m=+154.529796654" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.485438 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" podStartSLOduration=2.485415429 podStartE2EDuration="2.485415429s" podCreationTimestamp="2026-01-30 00:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:11.460976793 +0000 UTC m=+154.572512713" watchObservedRunningTime="2026-01-30 00:13:11.485415429 +0000 UTC m=+154.596951319" Jan 30 00:13:11 crc kubenswrapper[5117]: I0130 00:13:11.948258 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:12 crc kubenswrapper[5117]: I0130 00:13:12.390373 5117 generic.go:358] "Generic (PLEG): container finished" podID="4933f303-e64d-4d37-8c67-b832aefc8def" containerID="b5414f335dd4dfe4ce193fedf843ac96f2460c7e4740470cec056aa1bd381e5d" exitCode=0 Jan 30 00:13:12 crc kubenswrapper[5117]: I0130 00:13:12.390980 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4933f303-e64d-4d37-8c67-b832aefc8def","Type":"ContainerDied","Data":"b5414f335dd4dfe4ce193fedf843ac96f2460c7e4740470cec056aa1bd381e5d"} Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.690049 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.719784 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4933f303-e64d-4d37-8c67-b832aefc8def-kube-api-access\") pod \"4933f303-e64d-4d37-8c67-b832aefc8def\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.719900 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4933f303-e64d-4d37-8c67-b832aefc8def-kubelet-dir\") pod \"4933f303-e64d-4d37-8c67-b832aefc8def\" (UID: \"4933f303-e64d-4d37-8c67-b832aefc8def\") " Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.720183 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4933f303-e64d-4d37-8c67-b832aefc8def-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4933f303-e64d-4d37-8c67-b832aefc8def" (UID: "4933f303-e64d-4d37-8c67-b832aefc8def"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.730134 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4933f303-e64d-4d37-8c67-b832aefc8def-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4933f303-e64d-4d37-8c67-b832aefc8def" (UID: "4933f303-e64d-4d37-8c67-b832aefc8def"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.798378 5117 ???:1] "http: TLS handshake error from 192.168.126.11:46532: no serving certificate available for the kubelet" Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.821481 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4933f303-e64d-4d37-8c67-b832aefc8def-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:13 crc kubenswrapper[5117]: I0130 00:13:13.821548 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4933f303-e64d-4d37-8c67-b832aefc8def-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.401592 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.401634 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4933f303-e64d-4d37-8c67-b832aefc8def","Type":"ContainerDied","Data":"a04eba8046bb87b66e664ccac5c01d00161efc8d8d6ef94448d14f0593036d38"} Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.401701 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a04eba8046bb87b66e664ccac5c01d00161efc8d8d6ef94448d14f0593036d38" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.677627 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.678154 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4933f303-e64d-4d37-8c67-b832aefc8def" containerName="pruner" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.678171 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="4933f303-e64d-4d37-8c67-b832aefc8def" containerName="pruner" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.678276 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="4933f303-e64d-4d37-8c67-b832aefc8def" containerName="pruner" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.686835 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.689284 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.689914 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.690952 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.733016 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.733090 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-var-lock\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.733349 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4847705e-44a0-41dc-85cf-ac809578afe8-kube-api-access\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.834643 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4847705e-44a0-41dc-85cf-ac809578afe8-kube-api-access\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.834716 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.834768 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-var-lock\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.834902 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-var-lock\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.834930 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:14 crc kubenswrapper[5117]: I0130 00:13:14.857636 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4847705e-44a0-41dc-85cf-ac809578afe8-kube-api-access\") pod \"installer-12-crc\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:15 crc kubenswrapper[5117]: I0130 00:13:15.007429 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:15 crc kubenswrapper[5117]: I0130 00:13:15.454344 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:15 crc kubenswrapper[5117]: E0130 00:13:15.679377 5117 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1cd991b_8078_45cb_9591_ae3f5a4d4db4.slice/crio-9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad\": RecentStats: unable to find data in memory cache]" Jan 30 00:13:16 crc kubenswrapper[5117]: I0130 00:13:16.413546 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4847705e-44a0-41dc-85cf-ac809578afe8","Type":"ContainerStarted","Data":"2e2b5b8b110fcb73a195fa2e46ae116203b98c989e26dae5e8dc8ac4556e27c1"} Jan 30 00:13:16 crc kubenswrapper[5117]: I0130 00:13:16.414050 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4847705e-44a0-41dc-85cf-ac809578afe8","Type":"ContainerStarted","Data":"756f6c491f04892d88b7550a5c9d04d40c231e62c5f5f6f1cd1926f9397f6085"} Jan 30 00:13:18 crc kubenswrapper[5117]: I0130 00:13:18.037959 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:13:18 crc kubenswrapper[5117]: E0130 00:13:18.038683 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:19 crc kubenswrapper[5117]: I0130 00:13:19.432279 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=5.432257169 podStartE2EDuration="5.432257169s" podCreationTimestamp="2026-01-30 00:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:16.447881107 +0000 UTC m=+159.559416997" watchObservedRunningTime="2026-01-30 00:13:19.432257169 +0000 UTC m=+162.543793059" Jan 30 00:13:19 crc kubenswrapper[5117]: I0130 00:13:19.435805 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgbnh"] Jan 30 00:13:25 crc kubenswrapper[5117]: E0130 00:13:25.798766 5117 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1cd991b_8078_45cb_9591_ae3f5a4d4db4.slice/crio-9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad\": RecentStats: unable to find data in memory cache]" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.208734 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-744479dc7b-8pqxp"] Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.209286 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" podUID="e319d20e-456e-492b-bd04-a3be934a737c" 
containerName="controller-manager" containerID="cri-o://d767d75a5b0ac47883d51e8d603d78dc7b3f1f390320a8fe0b674e7ac07db63d" gracePeriod=30 Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.234772 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"] Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.235009 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" podUID="333dce6d-4088-4b4a-9256-cd5f0e508e54" containerName="route-controller-manager" containerID="cri-o://f89ce982a95bb7241a387a61cf33de4cdd824addf986dcf54a72d48fe8e88308" gracePeriod=30 Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.491065 5117 generic.go:358] "Generic (PLEG): container finished" podID="333dce6d-4088-4b4a-9256-cd5f0e508e54" containerID="f89ce982a95bb7241a387a61cf33de4cdd824addf986dcf54a72d48fe8e88308" exitCode=0 Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.491169 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" event={"ID":"333dce6d-4088-4b4a-9256-cd5f0e508e54","Type":"ContainerDied","Data":"f89ce982a95bb7241a387a61cf33de4cdd824addf986dcf54a72d48fe8e88308"} Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.493728 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" event={"ID":"e319d20e-456e-492b-bd04-a3be934a737c","Type":"ContainerDied","Data":"d767d75a5b0ac47883d51e8d603d78dc7b3f1f390320a8fe0b674e7ac07db63d"} Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.493679 5117 generic.go:358] "Generic (PLEG): container finished" podID="e319d20e-456e-492b-bd04-a3be934a737c" containerID="d767d75a5b0ac47883d51e8d603d78dc7b3f1f390320a8fe0b674e7ac07db63d" exitCode=0 Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.683888 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.708125 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg"] Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.708748 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="333dce6d-4088-4b4a-9256-cd5f0e508e54" containerName="route-controller-manager" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.708759 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="333dce6d-4088-4b4a-9256-cd5f0e508e54" containerName="route-controller-manager" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.708864 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="333dce6d-4088-4b4a-9256-cd5f0e508e54" containerName="route-controller-manager" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.716871 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.718672 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg"] Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836132 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-config\") pod \"333dce6d-4088-4b4a-9256-cd5f0e508e54\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836176 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/333dce6d-4088-4b4a-9256-cd5f0e508e54-tmp\") pod \"333dce6d-4088-4b4a-9256-cd5f0e508e54\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836209 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333dce6d-4088-4b4a-9256-cd5f0e508e54-serving-cert\") pod \"333dce6d-4088-4b4a-9256-cd5f0e508e54\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836245 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-client-ca\") pod \"333dce6d-4088-4b4a-9256-cd5f0e508e54\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836294 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhzlm\" (UniqueName: \"kubernetes.io/projected/333dce6d-4088-4b4a-9256-cd5f0e508e54-kube-api-access-nhzlm\") pod \"333dce6d-4088-4b4a-9256-cd5f0e508e54\" (UID: \"333dce6d-4088-4b4a-9256-cd5f0e508e54\") " Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836458 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-client-ca\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836489 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eddc800-8195-45d6-a456-dc5f98e8a68a-serving-cert\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836507 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-config\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836560 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kbhg4\" (UniqueName: \"kubernetes.io/projected/4eddc800-8195-45d6-a456-dc5f98e8a68a-kube-api-access-kbhg4\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836580 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4eddc800-8195-45d6-a456-dc5f98e8a68a-tmp\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836859 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/333dce6d-4088-4b4a-9256-cd5f0e508e54-tmp" (OuterVolumeSpecName: "tmp") pod "333dce6d-4088-4b4a-9256-cd5f0e508e54" (UID: "333dce6d-4088-4b4a-9256-cd5f0e508e54"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.836973 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-config" (OuterVolumeSpecName: "config") pod "333dce6d-4088-4b4a-9256-cd5f0e508e54" (UID: "333dce6d-4088-4b4a-9256-cd5f0e508e54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.837315 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-client-ca" (OuterVolumeSpecName: "client-ca") pod "333dce6d-4088-4b4a-9256-cd5f0e508e54" (UID: "333dce6d-4088-4b4a-9256-cd5f0e508e54"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.848607 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333dce6d-4088-4b4a-9256-cd5f0e508e54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "333dce6d-4088-4b4a-9256-cd5f0e508e54" (UID: "333dce6d-4088-4b4a-9256-cd5f0e508e54"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.848945 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333dce6d-4088-4b4a-9256-cd5f0e508e54-kube-api-access-nhzlm" (OuterVolumeSpecName: "kube-api-access-nhzlm") pod "333dce6d-4088-4b4a-9256-cd5f0e508e54" (UID: "333dce6d-4088-4b4a-9256-cd5f0e508e54"). InnerVolumeSpecName "kube-api-access-nhzlm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.903118 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.928255 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb"] Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.928814 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e319d20e-456e-492b-bd04-a3be934a737c" containerName="controller-manager" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.928835 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e319d20e-456e-492b-bd04-a3be934a737c" containerName="controller-manager" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.928970 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e319d20e-456e-492b-bd04-a3be934a737c" containerName="controller-manager" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.936213 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.939785 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb"] Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963313 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gf46\" (UniqueName: \"kubernetes.io/projected/c54510cc-dec8-47a8-9889-5a0cdf023dbd-kube-api-access-5gf46\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963401 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-config\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963437 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-client-ca\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963570 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-client-ca\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963614 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54510cc-dec8-47a8-9889-5a0cdf023dbd-serving-cert\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963653 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c54510cc-dec8-47a8-9889-5a0cdf023dbd-tmp\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963811 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eddc800-8195-45d6-a456-dc5f98e8a68a-serving-cert\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963850 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-config\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963961 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhg4\" (UniqueName: \"kubernetes.io/projected/4eddc800-8195-45d6-a456-dc5f98e8a68a-kube-api-access-kbhg4\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.963992 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4eddc800-8195-45d6-a456-dc5f98e8a68a-tmp\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964010 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-proxy-ca-bundles\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964082 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964099 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/333dce6d-4088-4b4a-9256-cd5f0e508e54-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964111 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333dce6d-4088-4b4a-9256-cd5f0e508e54-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964122 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333dce6d-4088-4b4a-9256-cd5f0e508e54-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 
00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964135 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhzlm\" (UniqueName: \"kubernetes.io/projected/333dce6d-4088-4b4a-9256-cd5f0e508e54-kube-api-access-nhzlm\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.964749 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4eddc800-8195-45d6-a456-dc5f98e8a68a-tmp\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.965060 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-client-ca\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.965384 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-config\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.971018 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eddc800-8195-45d6-a456-dc5f98e8a68a-serving-cert\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:29 crc kubenswrapper[5117]: I0130 00:13:29.999485 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhg4\" (UniqueName: \"kubernetes.io/projected/4eddc800-8195-45d6-a456-dc5f98e8a68a-kube-api-access-kbhg4\") pod \"route-controller-manager-56c79b5987-f9ldg\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.042919 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064443 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-proxy-ca-bundles\") pod \"e319d20e-456e-492b-bd04-a3be934a737c\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064487 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-config\") pod \"e319d20e-456e-492b-bd04-a3be934a737c\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064510 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e319d20e-456e-492b-bd04-a3be934a737c-serving-cert\") pod \"e319d20e-456e-492b-bd04-a3be934a737c\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064535 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e319d20e-456e-492b-bd04-a3be934a737c-tmp\") pod \"e319d20e-456e-492b-bd04-a3be934a737c\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064587 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-client-ca\") pod \"e319d20e-456e-492b-bd04-a3be934a737c\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064610 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb7lw\" (UniqueName: \"kubernetes.io/projected/e319d20e-456e-492b-bd04-a3be934a737c-kube-api-access-gb7lw\") pod \"e319d20e-456e-492b-bd04-a3be934a737c\" (UID: \"e319d20e-456e-492b-bd04-a3be934a737c\") " Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064707 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-proxy-ca-bundles\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064737 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gf46\" (UniqueName: \"kubernetes.io/projected/c54510cc-dec8-47a8-9889-5a0cdf023dbd-kube-api-access-5gf46\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064767 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-config\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064799 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-client-ca\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064833 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54510cc-dec8-47a8-9889-5a0cdf023dbd-serving-cert\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.064853 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c54510cc-dec8-47a8-9889-5a0cdf023dbd-tmp\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.065350 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c54510cc-dec8-47a8-9889-5a0cdf023dbd-tmp\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.066115 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e319d20e-456e-492b-bd04-a3be934a737c-tmp" (OuterVolumeSpecName: "tmp") pod "e319d20e-456e-492b-bd04-a3be934a737c" (UID: "e319d20e-456e-492b-bd04-a3be934a737c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.066687 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-client-ca" (OuterVolumeSpecName: "client-ca") pod "e319d20e-456e-492b-bd04-a3be934a737c" (UID: "e319d20e-456e-492b-bd04-a3be934a737c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.066723 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-proxy-ca-bundles\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.066676 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e319d20e-456e-492b-bd04-a3be934a737c" (UID: "e319d20e-456e-492b-bd04-a3be934a737c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.066903 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-config" (OuterVolumeSpecName: "config") pod "e319d20e-456e-492b-bd04-a3be934a737c" (UID: "e319d20e-456e-492b-bd04-a3be934a737c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.066921 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-client-ca\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.067451 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-config\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.069520 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e319d20e-456e-492b-bd04-a3be934a737c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e319d20e-456e-492b-bd04-a3be934a737c" (UID: "e319d20e-456e-492b-bd04-a3be934a737c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.069955 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e319d20e-456e-492b-bd04-a3be934a737c-kube-api-access-gb7lw" (OuterVolumeSpecName: "kube-api-access-gb7lw") pod "e319d20e-456e-492b-bd04-a3be934a737c" (UID: "e319d20e-456e-492b-bd04-a3be934a737c"). InnerVolumeSpecName "kube-api-access-gb7lw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.072214 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54510cc-dec8-47a8-9889-5a0cdf023dbd-serving-cert\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.085509 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gf46\" (UniqueName: \"kubernetes.io/projected/c54510cc-dec8-47a8-9889-5a0cdf023dbd-kube-api-access-5gf46\") pod \"controller-manager-6f4bb58df6-w5tdb\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.165598 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e319d20e-456e-492b-bd04-a3be934a737c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.165881 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e319d20e-456e-492b-bd04-a3be934a737c-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.165893 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.165902 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gb7lw\" (UniqueName: \"kubernetes.io/projected/e319d20e-456e-492b-bd04-a3be934a737c-kube-api-access-gb7lw\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.165911 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.165921 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e319d20e-456e-492b-bd04-a3be934a737c-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.300955 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.462969 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg"] Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.502751 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" event={"ID":"333dce6d-4088-4b4a-9256-cd5f0e508e54","Type":"ContainerDied","Data":"51801da429b76e058d4d9f5db371470c422153c61a5799defb0f5925324eb8a8"} Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.502894 5117 scope.go:117] "RemoveContainer" containerID="f89ce982a95bb7241a387a61cf33de4cdd824addf986dcf54a72d48fe8e88308" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.502776 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.507337 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" event={"ID":"e319d20e-456e-492b-bd04-a3be934a737c","Type":"ContainerDied","Data":"9587a65cb251bdf86e82a220c5dcd01c3b15b2ca39ea7249e01e14391ab2edf6"} Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.507481 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-744479dc7b-8pqxp" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.508867 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" event={"ID":"4eddc800-8195-45d6-a456-dc5f98e8a68a","Type":"ContainerStarted","Data":"19f177f2a3aacd72b5ae0b02eee58d1f7e1f9da73d5375642a6f110e3d923a9b"} Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.543661 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-744479dc7b-8pqxp"] Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.546592 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-744479dc7b-8pqxp"] Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.548224 5117 scope.go:117] "RemoveContainer" containerID="d767d75a5b0ac47883d51e8d603d78dc7b3f1f390320a8fe0b674e7ac07db63d" Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.561983 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"] Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.571550 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7756c44959-jl84t"] Jan 30 00:13:30 crc kubenswrapper[5117]: I0130 00:13:30.690711 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb"] Jan 30 00:13:30 crc kubenswrapper[5117]: W0130 00:13:30.699896 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc54510cc_dec8_47a8_9889_5a0cdf023dbd.slice/crio-c58e393851e65b912f3a90531b93f128d2b296449ec99d908754e97a3493a7cc WatchSource:0}: Error finding container c58e393851e65b912f3a90531b93f128d2b296449ec99d908754e97a3493a7cc: Status 404 returned error can't find the container with id c58e393851e65b912f3a90531b93f128d2b296449ec99d908754e97a3493a7cc Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.039429 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:13:31 crc kubenswrapper[5117]: E0130 00:13:31.040475 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.059137 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="333dce6d-4088-4b4a-9256-cd5f0e508e54" 
path="/var/lib/kubelet/pods/333dce6d-4088-4b4a-9256-cd5f0e508e54/volumes" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.060321 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e319d20e-456e-492b-bd04-a3be934a737c" path="/var/lib/kubelet/pods/e319d20e-456e-492b-bd04-a3be934a737c/volumes" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.518405 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" event={"ID":"4eddc800-8195-45d6-a456-dc5f98e8a68a","Type":"ContainerStarted","Data":"6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0"} Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.519810 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.522298 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" event={"ID":"c54510cc-dec8-47a8-9889-5a0cdf023dbd","Type":"ContainerStarted","Data":"756ffb9a21287927e85ef5ffbd93cd00ab716ae4dd6373d10350ebdfea434ee4"} Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.522338 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" event={"ID":"c54510cc-dec8-47a8-9889-5a0cdf023dbd","Type":"ContainerStarted","Data":"c58e393851e65b912f3a90531b93f128d2b296449ec99d908754e97a3493a7cc"} Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.523006 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.528960 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.542192 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" podStartSLOduration=2.542176257 podStartE2EDuration="2.542176257s" podCreationTimestamp="2026-01-30 00:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:31.541856838 +0000 UTC m=+174.653392728" watchObservedRunningTime="2026-01-30 00:13:31.542176257 +0000 UTC m=+174.653712147" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.564796 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" podStartSLOduration=2.56475374 podStartE2EDuration="2.56475374s" podCreationTimestamp="2026-01-30 00:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:31.560166632 +0000 UTC m=+174.671702542" watchObservedRunningTime="2026-01-30 00:13:31.56475374 +0000 UTC m=+174.676289630" Jan 30 00:13:31 crc kubenswrapper[5117]: I0130 00:13:31.681116 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:35 crc kubenswrapper[5117]: E0130 00:13:35.937759 5117 cadvisor_stats_provider.go:525] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1cd991b_8078_45cb_9591_ae3f5a4d4db4.slice/crio-9fca5242733fe45e3dd1750021ff92dffaaafb009e183bdb4662ee896aa41fad\": RecentStats: unable to find data in memory cache]" Jan 30 00:13:44 crc kubenswrapper[5117]: I0130 00:13:44.482607 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerName="oauth-openshift" containerID="cri-o://bffa4609cdef287849b80d2ce5fa955eb891f139222b56fefe77cdb1450ec17f" gracePeriod=15 Jan 30 00:13:44 crc kubenswrapper[5117]: I0130 00:13:44.627030 5117 generic.go:358] "Generic (PLEG): container finished" podID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerID="bffa4609cdef287849b80d2ce5fa955eb891f139222b56fefe77cdb1450ec17f" exitCode=0 Jan 30 00:13:44 crc kubenswrapper[5117]: I0130 00:13:44.627157 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" event={"ID":"2cf47fab-c86d-4283-b285-b4ca795bf6d6","Type":"ContainerDied","Data":"bffa4609cdef287849b80d2ce5fa955eb891f139222b56fefe77cdb1450ec17f"} Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.012493 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.081596 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-579d78cbf5-9sxfd"] Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.082176 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerName="oauth-openshift" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.082192 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerName="oauth-openshift" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.082291 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" containerName="oauth-openshift" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085047 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl96g\" (UniqueName: \"kubernetes.io/projected/2cf47fab-c86d-4283-b285-b4ca795bf6d6-kube-api-access-bl96g\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085117 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-cliconfig\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085144 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-trusted-ca-bundle\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085178 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-router-certs\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085231 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-error\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085267 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-provider-selection\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085311 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-ocp-branding-template\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085353 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-dir\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085379 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-service-ca\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085417 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-idp-0-file-data\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085500 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-session\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085535 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-policies\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085583 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-serving-cert\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.085609 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-login\") pod \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\" (UID: \"2cf47fab-c86d-4283-b285-b4ca795bf6d6\") " Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.086928 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.088494 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.089291 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.090780 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.104289 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.104910 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.105053 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.105261 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.105477 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.107271 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.108087 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.106286 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.118296 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.118472 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-579d78cbf5-9sxfd"] Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.119097 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf47fab-c86d-4283-b285-b4ca795bf6d6-kube-api-access-bl96g" (OuterVolumeSpecName: "kube-api-access-bl96g") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "kube-api-access-bl96g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.119512 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2cf47fab-c86d-4283-b285-b4ca795bf6d6" (UID: "2cf47fab-c86d-4283-b285-b4ca795bf6d6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.186871 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-error\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.186942 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.186968 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-audit-policies\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187158 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-session\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187295 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fb13878-7b3a-4342-a63e-de70ea0b46e9-audit-dir\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187337 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mbpg\" (UniqueName: \"kubernetes.io/projected/1fb13878-7b3a-4342-a63e-de70ea0b46e9-kube-api-access-4mbpg\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187415 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187451 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187531 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187614 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187660 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187774 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-login\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.187830 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: 
\"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189006 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189655 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189742 5117 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189765 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189788 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189812 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189831 5117 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189850 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189872 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189892 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bl96g\" (UniqueName: \"kubernetes.io/projected/2cf47fab-c86d-4283-b285-b4ca795bf6d6-kube-api-access-bl96g\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189912 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189932 5117 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189952 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189971 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.189991 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2cf47fab-c86d-4283-b285-b4ca795bf6d6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.291125 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.291187 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-error\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.291211 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.291257 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-audit-policies\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292269 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-audit-policies\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.291295 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-session\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292726 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fb13878-7b3a-4342-a63e-de70ea0b46e9-audit-dir\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292781 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mbpg\" (UniqueName: \"kubernetes.io/projected/1fb13878-7b3a-4342-a63e-de70ea0b46e9-kube-api-access-4mbpg\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292836 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292876 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292900 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.292960 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fb13878-7b3a-4342-a63e-de70ea0b46e9-audit-dir\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.293496 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.293596 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.293654 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.293749 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-login\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.293805 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.294353 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.295735 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.295926 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.296030 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-session\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.296070 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.296097 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-error\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.298324 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.298970 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-user-template-login\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.299197 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.300751 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fb13878-7b3a-4342-a63e-de70ea0b46e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.308166 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mbpg\" (UniqueName: \"kubernetes.io/projected/1fb13878-7b3a-4342-a63e-de70ea0b46e9-kube-api-access-4mbpg\") pod \"oauth-openshift-579d78cbf5-9sxfd\" (UID: \"1fb13878-7b3a-4342-a63e-de70ea0b46e9\") " pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.466167 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.634115 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" event={"ID":"2cf47fab-c86d-4283-b285-b4ca795bf6d6","Type":"ContainerDied","Data":"93e62e159b3d05ca267941c64b096d3f41553153e7deb2d81740f45f497368b6"} Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.634164 5117 scope.go:117] "RemoveContainer" containerID="bffa4609cdef287849b80d2ce5fa955eb891f139222b56fefe77cdb1450ec17f" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.634295 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgbnh" Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.672402 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgbnh"] Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.676418 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgbnh"] Jan 30 00:13:45 crc kubenswrapper[5117]: I0130 00:13:45.904376 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-579d78cbf5-9sxfd"] Jan 30 00:13:46 crc kubenswrapper[5117]: I0130 00:13:46.037110 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:13:46 crc kubenswrapper[5117]: E0130 00:13:46.037576 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:13:46 crc kubenswrapper[5117]: I0130 00:13:46.640993 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" event={"ID":"1fb13878-7b3a-4342-a63e-de70ea0b46e9","Type":"ContainerStarted","Data":"b69c66a2664257bb1072ea972669f994cfc919d90c94d9dfd2f796562fcb30b9"} Jan 30 00:13:46 crc kubenswrapper[5117]: I0130 00:13:46.641664 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" event={"ID":"1fb13878-7b3a-4342-a63e-de70ea0b46e9","Type":"ContainerStarted","Data":"70a04fe935f2f6b23a0b2346c6ed67ce717296b10a84a01a5fdd538c18a4c432"} Jan 30 00:13:46 crc kubenswrapper[5117]: I0130 00:13:46.641707 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:46 crc kubenswrapper[5117]: I0130 00:13:46.646921 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" Jan 30 00:13:46 crc kubenswrapper[5117]: I0130 00:13:46.659193 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-579d78cbf5-9sxfd" podStartSLOduration=27.659174976 podStartE2EDuration="27.659174976s" podCreationTimestamp="2026-01-30 00:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:46.65754644 
+0000 UTC m=+189.769082350" watchObservedRunningTime="2026-01-30 00:13:46.659174976 +0000 UTC m=+189.770710866" Jan 30 00:13:47 crc kubenswrapper[5117]: I0130 00:13:47.043302 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf47fab-c86d-4283-b285-b4ca795bf6d6" path="/var/lib/kubelet/pods/2cf47fab-c86d-4283-b285-b4ca795bf6d6/volumes" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.259739 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb"] Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.260101 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" podUID="c54510cc-dec8-47a8-9889-5a0cdf023dbd" containerName="controller-manager" containerID="cri-o://756ffb9a21287927e85ef5ffbd93cd00ab716ae4dd6373d10350ebdfea434ee4" gracePeriod=30 Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.276215 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg"] Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.277045 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" podUID="4eddc800-8195-45d6-a456-dc5f98e8a68a" containerName="route-controller-manager" containerID="cri-o://6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0" gracePeriod=30 Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.656401 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.670349 5117 generic.go:358] "Generic (PLEG): container finished" podID="c54510cc-dec8-47a8-9889-5a0cdf023dbd" containerID="756ffb9a21287927e85ef5ffbd93cd00ab716ae4dd6373d10350ebdfea434ee4" exitCode=0 Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.670443 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" event={"ID":"c54510cc-dec8-47a8-9889-5a0cdf023dbd","Type":"ContainerDied","Data":"756ffb9a21287927e85ef5ffbd93cd00ab716ae4dd6373d10350ebdfea434ee4"} Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.674391 5117 generic.go:358] "Generic (PLEG): container finished" podID="4eddc800-8195-45d6-a456-dc5f98e8a68a" containerID="6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0" exitCode=0 Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.674434 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" event={"ID":"4eddc800-8195-45d6-a456-dc5f98e8a68a","Type":"ContainerDied","Data":"6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0"} Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.674465 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" event={"ID":"4eddc800-8195-45d6-a456-dc5f98e8a68a","Type":"ContainerDied","Data":"19f177f2a3aacd72b5ae0b02eee58d1f7e1f9da73d5375642a6f110e3d923a9b"} Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.674466 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.674541 5117 scope.go:117] "RemoveContainer" containerID="6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.694636 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7"] Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.695370 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4eddc800-8195-45d6-a456-dc5f98e8a68a" containerName="route-controller-manager" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.695387 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eddc800-8195-45d6-a456-dc5f98e8a68a" containerName="route-controller-manager" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.695516 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="4eddc800-8195-45d6-a456-dc5f98e8a68a" containerName="route-controller-manager" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.712365 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7"] Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.712523 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.721293 5117 scope.go:117] "RemoveContainer" containerID="6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0" Jan 30 00:13:49 crc kubenswrapper[5117]: E0130 00:13:49.722291 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0\": container with ID starting with 6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0 not found: ID does not exist" containerID="6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.722330 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0"} err="failed to get container status \"6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0\": rpc error: code = NotFound desc = could not find container \"6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0\": container with ID starting with 6da75f346c5745694c576259c985075ed08269012f8df3bb40c4425c12cf9de0 not found: ID does not exist" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.759555 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-config\") pod \"4eddc800-8195-45d6-a456-dc5f98e8a68a\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.759605 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-client-ca\") pod \"4eddc800-8195-45d6-a456-dc5f98e8a68a\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.759641 5117 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbhg4\" (UniqueName: \"kubernetes.io/projected/4eddc800-8195-45d6-a456-dc5f98e8a68a-kube-api-access-kbhg4\") pod \"4eddc800-8195-45d6-a456-dc5f98e8a68a\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.759766 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4eddc800-8195-45d6-a456-dc5f98e8a68a-tmp\") pod \"4eddc800-8195-45d6-a456-dc5f98e8a68a\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.759814 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eddc800-8195-45d6-a456-dc5f98e8a68a-serving-cert\") pod \"4eddc800-8195-45d6-a456-dc5f98e8a68a\" (UID: \"4eddc800-8195-45d6-a456-dc5f98e8a68a\") " Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.761648 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-client-ca" (OuterVolumeSpecName: "client-ca") pod "4eddc800-8195-45d6-a456-dc5f98e8a68a" (UID: "4eddc800-8195-45d6-a456-dc5f98e8a68a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.761820 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eddc800-8195-45d6-a456-dc5f98e8a68a-tmp" (OuterVolumeSpecName: "tmp") pod "4eddc800-8195-45d6-a456-dc5f98e8a68a" (UID: "4eddc800-8195-45d6-a456-dc5f98e8a68a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.762990 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-config" (OuterVolumeSpecName: "config") pod "4eddc800-8195-45d6-a456-dc5f98e8a68a" (UID: "4eddc800-8195-45d6-a456-dc5f98e8a68a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.768211 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eddc800-8195-45d6-a456-dc5f98e8a68a-kube-api-access-kbhg4" (OuterVolumeSpecName: "kube-api-access-kbhg4") pod "4eddc800-8195-45d6-a456-dc5f98e8a68a" (UID: "4eddc800-8195-45d6-a456-dc5f98e8a68a"). InnerVolumeSpecName "kube-api-access-kbhg4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.769021 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eddc800-8195-45d6-a456-dc5f98e8a68a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4eddc800-8195-45d6-a456-dc5f98e8a68a" (UID: "4eddc800-8195-45d6-a456-dc5f98e8a68a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861151 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c70df82-1014-45fc-b370-e97bb459f603-tmp\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861227 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c70df82-1014-45fc-b370-e97bb459f603-serving-cert\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861259 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c70df82-1014-45fc-b370-e97bb459f603-client-ca\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861320 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c70df82-1014-45fc-b370-e97bb459f603-config\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861345 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4pjp\" (UniqueName: \"kubernetes.io/projected/5c70df82-1014-45fc-b370-e97bb459f603-kube-api-access-n4pjp\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861420 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861434 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eddc800-8195-45d6-a456-dc5f98e8a68a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861448 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kbhg4\" (UniqueName: \"kubernetes.io/projected/4eddc800-8195-45d6-a456-dc5f98e8a68a-kube-api-access-kbhg4\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861458 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4eddc800-8195-45d6-a456-dc5f98e8a68a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.861467 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4eddc800-8195-45d6-a456-dc5f98e8a68a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.962472 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c70df82-1014-45fc-b370-e97bb459f603-config\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.962906 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4pjp\" (UniqueName: \"kubernetes.io/projected/5c70df82-1014-45fc-b370-e97bb459f603-kube-api-access-n4pjp\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.963003 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c70df82-1014-45fc-b370-e97bb459f603-tmp\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.963027 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c70df82-1014-45fc-b370-e97bb459f603-serving-cert\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.963053 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c70df82-1014-45fc-b370-e97bb459f603-client-ca\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.963987 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c70df82-1014-45fc-b370-e97bb459f603-client-ca\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.964153 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c70df82-1014-45fc-b370-e97bb459f603-tmp\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.964253 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c70df82-1014-45fc-b370-e97bb459f603-config\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc 
kubenswrapper[5117]: I0130 00:13:49.967901 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c70df82-1014-45fc-b370-e97bb459f603-serving-cert\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:49 crc kubenswrapper[5117]: I0130 00:13:49.983391 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4pjp\" (UniqueName: \"kubernetes.io/projected/5c70df82-1014-45fc-b370-e97bb459f603-kube-api-access-n4pjp\") pod \"route-controller-manager-758fdbfbfb-997x7\" (UID: \"5c70df82-1014-45fc-b370-e97bb459f603\") " pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.010300 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg"] Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.015193 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c79b5987-f9ldg"] Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.025423 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.031999 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.056749 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k"] Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.057288 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c54510cc-dec8-47a8-9889-5a0cdf023dbd" containerName="controller-manager" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.057307 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54510cc-dec8-47a8-9889-5a0cdf023dbd" containerName="controller-manager" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.057407 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="c54510cc-dec8-47a8-9889-5a0cdf023dbd" containerName="controller-manager" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.071943 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k"] Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.072089 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.165519 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c54510cc-dec8-47a8-9889-5a0cdf023dbd-tmp\") pod \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.165606 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-proxy-ca-bundles\") pod \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.165674 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gf46\" (UniqueName: \"kubernetes.io/projected/c54510cc-dec8-47a8-9889-5a0cdf023dbd-kube-api-access-5gf46\") pod \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.165739 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-client-ca\") pod \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.165828 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-config\") pod \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.165862 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54510cc-dec8-47a8-9889-5a0cdf023dbd-serving-cert\") pod \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\" (UID: \"c54510cc-dec8-47a8-9889-5a0cdf023dbd\") " Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.167969 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54510cc-dec8-47a8-9889-5a0cdf023dbd-tmp" (OuterVolumeSpecName: "tmp") pod "c54510cc-dec8-47a8-9889-5a0cdf023dbd" (UID: "c54510cc-dec8-47a8-9889-5a0cdf023dbd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.168245 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-client-ca" (OuterVolumeSpecName: "client-ca") pod "c54510cc-dec8-47a8-9889-5a0cdf023dbd" (UID: "c54510cc-dec8-47a8-9889-5a0cdf023dbd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.168242 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c54510cc-dec8-47a8-9889-5a0cdf023dbd" (UID: "c54510cc-dec8-47a8-9889-5a0cdf023dbd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.168609 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-config" (OuterVolumeSpecName: "config") pod "c54510cc-dec8-47a8-9889-5a0cdf023dbd" (UID: "c54510cc-dec8-47a8-9889-5a0cdf023dbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.170489 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54510cc-dec8-47a8-9889-5a0cdf023dbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c54510cc-dec8-47a8-9889-5a0cdf023dbd" (UID: "c54510cc-dec8-47a8-9889-5a0cdf023dbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.171913 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54510cc-dec8-47a8-9889-5a0cdf023dbd-kube-api-access-5gf46" (OuterVolumeSpecName: "kube-api-access-5gf46") pod "c54510cc-dec8-47a8-9889-5a0cdf023dbd" (UID: "c54510cc-dec8-47a8-9889-5a0cdf023dbd"). InnerVolumeSpecName "kube-api-access-5gf46". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.241990 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7"] Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267116 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vth2\" (UniqueName: \"kubernetes.io/projected/6e098783-f06a-467c-817d-27e420e206b0-kube-api-access-7vth2\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267188 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-client-ca\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267227 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e098783-f06a-467c-817d-27e420e206b0-tmp\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267295 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-config\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267347 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-proxy-ca-bundles\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267394 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e098783-f06a-467c-817d-27e420e206b0-serving-cert\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267477 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267494 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267505 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54510cc-dec8-47a8-9889-5a0cdf023dbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267518 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c54510cc-dec8-47a8-9889-5a0cdf023dbd-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267529 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c54510cc-dec8-47a8-9889-5a0cdf023dbd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.267541 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gf46\" (UniqueName: \"kubernetes.io/projected/c54510cc-dec8-47a8-9889-5a0cdf023dbd-kube-api-access-5gf46\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.368278 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-client-ca\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.368329 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e098783-f06a-467c-817d-27e420e206b0-tmp\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.368363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-config\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.368414 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-proxy-ca-bundles\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.368543 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e098783-f06a-467c-817d-27e420e206b0-serving-cert\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.368648 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vth2\" (UniqueName: \"kubernetes.io/projected/6e098783-f06a-467c-817d-27e420e206b0-kube-api-access-7vth2\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.369075 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e098783-f06a-467c-817d-27e420e206b0-tmp\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.369484 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-client-ca\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.369805 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-proxy-ca-bundles\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.369946 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e098783-f06a-467c-817d-27e420e206b0-config\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.373776 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e098783-f06a-467c-817d-27e420e206b0-serving-cert\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: \"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.385911 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vth2\" (UniqueName: \"kubernetes.io/projected/6e098783-f06a-467c-817d-27e420e206b0-kube-api-access-7vth2\") pod \"controller-manager-84dbb4d7c9-7g59k\" (UID: 
\"6e098783-f06a-467c-817d-27e420e206b0\") " pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.389614 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.574629 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k"] Jan 30 00:13:50 crc kubenswrapper[5117]: W0130 00:13:50.591082 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e098783_f06a_467c_817d_27e420e206b0.slice/crio-e1b21a68bae2b1de7899b0861ea870c15ff206474a612450492b7a2dabd6a1bb WatchSource:0}: Error finding container e1b21a68bae2b1de7899b0861ea870c15ff206474a612450492b7a2dabd6a1bb: Status 404 returned error can't find the container with id e1b21a68bae2b1de7899b0861ea870c15ff206474a612450492b7a2dabd6a1bb Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.682482 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.682492 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb" event={"ID":"c54510cc-dec8-47a8-9889-5a0cdf023dbd","Type":"ContainerDied","Data":"c58e393851e65b912f3a90531b93f128d2b296449ec99d908754e97a3493a7cc"} Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.682576 5117 scope.go:117] "RemoveContainer" containerID="756ffb9a21287927e85ef5ffbd93cd00ab716ae4dd6373d10350ebdfea434ee4" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.691343 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" event={"ID":"5c70df82-1014-45fc-b370-e97bb459f603","Type":"ContainerStarted","Data":"fdcd2c00cd8d872faa72d5927bf4d863ca0d183e944bcff6b1c2807294616fd6"} Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.691395 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" event={"ID":"5c70df82-1014-45fc-b370-e97bb459f603","Type":"ContainerStarted","Data":"cab709f836ea7b1c0fa4ba44fc32a09b314a68f00a8b700cb42b24e565ee15f0"} Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.691597 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.692353 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" event={"ID":"6e098783-f06a-467c-817d-27e420e206b0","Type":"ContainerStarted","Data":"e1b21a68bae2b1de7899b0861ea870c15ff206474a612450492b7a2dabd6a1bb"} Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.701753 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.735095 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-758fdbfbfb-997x7" podStartSLOduration=1.735081088 
podStartE2EDuration="1.735081088s" podCreationTimestamp="2026-01-30 00:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:50.723885234 +0000 UTC m=+193.835421134" watchObservedRunningTime="2026-01-30 00:13:50.735081088 +0000 UTC m=+193.846616978" Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.738251 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb"] Jan 30 00:13:50 crc kubenswrapper[5117]: I0130 00:13:50.742767 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f4bb58df6-w5tdb"] Jan 30 00:13:51 crc kubenswrapper[5117]: I0130 00:13:51.044251 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eddc800-8195-45d6-a456-dc5f98e8a68a" path="/var/lib/kubelet/pods/4eddc800-8195-45d6-a456-dc5f98e8a68a/volumes" Jan 30 00:13:51 crc kubenswrapper[5117]: I0130 00:13:51.044819 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54510cc-dec8-47a8-9889-5a0cdf023dbd" path="/var/lib/kubelet/pods/c54510cc-dec8-47a8-9889-5a0cdf023dbd/volumes" Jan 30 00:13:51 crc kubenswrapper[5117]: I0130 00:13:51.706325 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" event={"ID":"6e098783-f06a-467c-817d-27e420e206b0","Type":"ContainerStarted","Data":"003621dfdfc321d789ecc68cc10f37563c8e9e48c2c61180eabd69ac055fc784"} Jan 30 00:13:51 crc kubenswrapper[5117]: I0130 00:13:51.706579 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:51 crc kubenswrapper[5117]: I0130 00:13:51.716062 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:51 crc kubenswrapper[5117]: I0130 00:13:51.725567 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podStartSLOduration=2.725548771 podStartE2EDuration="2.725548771s" podCreationTimestamp="2026-01-30 00:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:51.723158764 +0000 UTC m=+194.834694674" watchObservedRunningTime="2026-01-30 00:13:51.725548771 +0000 UTC m=+194.837084661" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.702520 5117 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.715219 5117 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.715266 5117 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.715350 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.715814 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275" gracePeriod=15 Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.716277 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e" gracePeriod=15 Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.716276 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356" gracePeriod=15 Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.716336 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9" gracePeriod=15 Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.721452 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732368 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732418 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732438 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732447 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732460 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732467 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732504 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732511 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732521 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732526 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732538 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732544 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732588 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732595 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732615 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732622 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732890 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732903 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732913 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732925 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732983 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.732994 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.733629 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.733650 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.733937 5117 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.733950 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.733972 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.733980 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.734142 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.788111 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: E0130 00:13:53.791005 5117 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813183 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813232 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813275 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813304 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813370 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 
00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813439 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813468 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813524 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813556 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.813587 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.914726 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915105 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915132 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915177 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc 
kubenswrapper[5117]: I0130 00:13:53.915193 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915226 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915248 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915273 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915299 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915319 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915398 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915884 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.915924 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916083 5117 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916118 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916137 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916160 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916183 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916206 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:53 crc kubenswrapper[5117]: I0130 00:13:53.916226 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.091992 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:54 crc kubenswrapper[5117]: W0130 00:13:54.109044 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-691cd06a4626261fd40c13c85e5641ff6bc75da709521e9698d4ddb2236aaca4 WatchSource:0}: Error finding container 691cd06a4626261fd40c13c85e5641ff6bc75da709521e9698d4ddb2236aaca4: Status 404 returned error can't find the container with id 691cd06a4626261fd40c13c85e5641ff6bc75da709521e9698d4ddb2236aaca4 Jan 30 00:13:54 crc kubenswrapper[5117]: E0130 00:13:54.112049 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59e898caba8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:13:54.11060187 +0000 UTC m=+197.222137760,LastTimestamp:2026-01-30 00:13:54.11060187 +0000 UTC m=+197.222137760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.727245 5117 generic.go:358] "Generic (PLEG): container finished" podID="4847705e-44a0-41dc-85cf-ac809578afe8" containerID="2e2b5b8b110fcb73a195fa2e46ae116203b98c989e26dae5e8dc8ac4556e27c1" exitCode=0 Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.727304 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4847705e-44a0-41dc-85cf-ac809578afe8","Type":"ContainerDied","Data":"2e2b5b8b110fcb73a195fa2e46ae116203b98c989e26dae5e8dc8ac4556e27c1"} Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.728585 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.729018 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d"} Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.729063 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"691cd06a4626261fd40c13c85e5641ff6bc75da709521e9698d4ddb2236aaca4"} Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.729483 5117 kubelet.go:3340] "Creating a 
mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.729863 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5117]: E0130 00:13:54.730092 5117 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.732306 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/0.log" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.732344 5117 generic.go:358] "Generic (PLEG): container finished" podID="6e098783-f06a-467c-817d-27e420e206b0" containerID="003621dfdfc321d789ecc68cc10f37563c8e9e48c2c61180eabd69ac055fc784" exitCode=255 Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.732432 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" event={"ID":"6e098783-f06a-467c-817d-27e420e206b0","Type":"ContainerDied","Data":"003621dfdfc321d789ecc68cc10f37563c8e9e48c2c61180eabd69ac055fc784"} Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.732876 5117 scope.go:117] "RemoveContainer" containerID="003621dfdfc321d789ecc68cc10f37563c8e9e48c2c61180eabd69ac055fc784" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.733390 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.733945 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.735570 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/4.log" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.739793 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.741137 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356" exitCode=0 Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.741164 5117 generic.go:358] "Generic (PLEG): container finished" 
podID="3a14caf222afb62aaabdc47808b6f944" containerID="78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e" exitCode=0 Jan 30 00:13:54 crc kubenswrapper[5117]: I0130 00:13:54.741177 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9" exitCode=2 Jan 30 00:13:55 crc kubenswrapper[5117]: I0130 00:13:55.753374 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/0.log" Jan 30 00:13:55 crc kubenswrapper[5117]: I0130 00:13:55.753964 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" event={"ID":"6e098783-f06a-467c-817d-27e420e206b0","Type":"ContainerStarted","Data":"bf7f6a359214dce01efea690e591c7c09e3e83e64817a55734a72626d770ff84"} Jan 30 00:13:55 crc kubenswrapper[5117]: I0130 00:13:55.754755 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:13:55 crc kubenswrapper[5117]: I0130 00:13:55.755314 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:55 crc kubenswrapper[5117]: I0130 00:13:55.756251 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.112712 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.113780 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.114200 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.116705 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/4.log" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.117604 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.118127 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.118474 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.118642 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.118880 5117 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.142892 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-kubelet-dir\") pod \"4847705e-44a0-41dc-85cf-ac809578afe8\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.142933 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4847705e-44a0-41dc-85cf-ac809578afe8-kube-api-access\") pod \"4847705e-44a0-41dc-85cf-ac809578afe8\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.142947 5117 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.142967 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-var-lock\") pod \"4847705e-44a0-41dc-85cf-ac809578afe8\" (UID: \"4847705e-44a0-41dc-85cf-ac809578afe8\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143046 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143051 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4847705e-44a0-41dc-85cf-ac809578afe8" (UID: "4847705e-44a0-41dc-85cf-ac809578afe8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143113 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143178 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143137 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-var-lock" (OuterVolumeSpecName: "var-lock") pod "4847705e-44a0-41dc-85cf-ac809578afe8" (UID: "4847705e-44a0-41dc-85cf-ac809578afe8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143250 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143933 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143276 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.143596 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144130 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144460 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144477 5117 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144485 5117 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4847705e-44a0-41dc-85cf-ac809578afe8-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144494 5117 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144502 5117 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.144509 5117 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.146426 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.150873 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4847705e-44a0-41dc-85cf-ac809578afe8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4847705e-44a0-41dc-85cf-ac809578afe8" (UID: "4847705e-44a0-41dc-85cf-ac809578afe8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.245764 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.245814 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4847705e-44a0-41dc-85cf-ac809578afe8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.757360 5117 patch_prober.go:28] interesting pod/controller-manager-84dbb4d7c9-7g59k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.757457 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podUID="6e098783-f06a-467c-817d-27e420e206b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.766660 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/4.log" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.768988 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.770338 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275" exitCode=0 Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.770493 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.770527 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.773804 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.773904 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4847705e-44a0-41dc-85cf-ac809578afe8","Type":"ContainerDied","Data":"756f6c491f04892d88b7550a5c9d04d40c231e62c5f5f6f1cd1926f9397f6085"} Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.773934 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756f6c491f04892d88b7550a5c9d04d40c231e62c5f5f6f1cd1926f9397f6085" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.823855 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.824262 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.824684 5117 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.825102 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.825711 5117 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.825991 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.863075 5117 scope.go:117] "RemoveContainer" containerID="ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.881260 5117 scope.go:117] "RemoveContainer" containerID="78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.897598 5117 scope.go:117] "RemoveContainer" containerID="96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9" Jan 
30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.929266 5117 scope.go:117] "RemoveContainer" containerID="da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275" Jan 30 00:13:56 crc kubenswrapper[5117]: I0130 00:13:56.961206 5117 scope.go:117] "RemoveContainer" containerID="8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152" Jan 30 00:13:56 crc kubenswrapper[5117]: E0130 00:13:56.991890 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59e898caba8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:13:54.11060187 +0000 UTC m=+197.222137760,LastTimestamp:2026-01-30 00:13:54.11060187 +0000 UTC m=+197.222137760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.016202 5117 scope.go:117] "RemoveContainer" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:13:57 crc kubenswrapper[5117]: E0130 00:13:57.016684 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c\": container with ID starting with 198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c not found: ID does not exist" containerID="198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.016735 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c"} err="failed to get container status \"198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c\": rpc error: code = NotFound desc = could not find container \"198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c\": container with ID starting with 198d01dabe49de5698fea03b46add1a1dcd3edbad511c02b23207bde1fd7aa7c not found: ID does not exist" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.016754 5117 scope.go:117] "RemoveContainer" containerID="ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356" Jan 30 00:13:57 crc kubenswrapper[5117]: E0130 00:13:57.016965 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\": container with ID starting with ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356 not found: ID does not exist" containerID="ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.016979 5117 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356"} err="failed to get container status \"ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\": rpc error: code = NotFound desc = could not find container \"ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356\": container with ID starting with ea4d6cf13ae74f4db7c2a43bc4930a8e435976043cd6fba93b792f086e0c0356 not found: ID does not exist" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.016990 5117 scope.go:117] "RemoveContainer" containerID="78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e" Jan 30 00:13:57 crc kubenswrapper[5117]: E0130 00:13:57.017133 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\": container with ID starting with 78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e not found: ID does not exist" containerID="78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.017156 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e"} err="failed to get container status \"78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\": rpc error: code = NotFound desc = could not find container \"78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e\": container with ID starting with 78ab00340a6da3d6d451018c6aba794d0056cde2bc803af667093776913adf8e not found: ID does not exist" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.017168 5117 scope.go:117] "RemoveContainer" containerID="96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9" Jan 30 00:13:57 crc kubenswrapper[5117]: E0130 00:13:57.017340 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\": container with ID starting with 96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9 not found: ID does not exist" containerID="96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.017362 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9"} err="failed to get container status \"96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\": rpc error: code = NotFound desc = could not find container \"96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9\": container with ID starting with 96fb605d625da91560067cdeda6360bfd2dbd9646f94460fbe81cd3f6e6610a9 not found: ID does not exist" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.017372 5117 scope.go:117] "RemoveContainer" containerID="da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275" Jan 30 00:13:57 crc kubenswrapper[5117]: E0130 00:13:57.017832 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\": container with ID starting with da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275 not found: ID does not exist" 
containerID="da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.017852 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275"} err="failed to get container status \"da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\": rpc error: code = NotFound desc = could not find container \"da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275\": container with ID starting with da8ce31717950a8ad197ffa1edef8f15fc7d846bbd6a87b41744184296ec8275 not found: ID does not exist" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.017865 5117 scope.go:117] "RemoveContainer" containerID="8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152" Jan 30 00:13:57 crc kubenswrapper[5117]: E0130 00:13:57.018057 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\": container with ID starting with 8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152 not found: ID does not exist" containerID="8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.018072 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152"} err="failed to get container status \"8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\": rpc error: code = NotFound desc = could not find container \"8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152\": container with ID starting with 8fc0a05b70b78658a3eda3a4206841453430d25b1572324b3c0c8f07725b1152 not found: ID does not exist" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.043438 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.774029 5117 patch_prober.go:28] interesting pod/controller-manager-84dbb4d7c9-7g59k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:13:57 crc kubenswrapper[5117]: I0130 00:13:57.774119 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podUID="6e098783-f06a-467c-817d-27e420e206b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 00:13:59 crc kubenswrapper[5117]: I0130 00:13:59.042772 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: I0130 00:13:59.043777 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" 
pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: E0130 00:13:59.943742 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:13:59Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: E0130 00:13:59.944034 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: E0130 00:13:59.944335 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: E0130 00:13:59.944817 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: E0130 00:13:59.945620 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5117]: E0130 00:13:59.945643 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.729182 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.730266 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.731033 5117 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.731778 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.732231 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5117]: I0130 00:14:02.732269 5117 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.732747 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms" Jan 30 00:14:02 crc kubenswrapper[5117]: E0130 00:14:02.934525 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Jan 30 00:14:03 crc kubenswrapper[5117]: E0130 00:14:03.336231 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Jan 30 00:14:04 crc kubenswrapper[5117]: E0130 00:14:04.137904 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Jan 30 00:14:04 crc kubenswrapper[5117]: I0130 00:14:04.555058 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:14:04 crc kubenswrapper[5117]: I0130 00:14:04.555130 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.036632 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.037848 5117 status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.038232 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.055765 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.055799 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:05 crc kubenswrapper[5117]: E0130 00:14:05.056485 5117 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.056808 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:05 crc kubenswrapper[5117]: E0130 00:14:05.739480 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.835239 5117 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="2bad37d73af443a32f500164ad1be9881679e67209c250ef89d6d336fb3e9352" exitCode=0 Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.835393 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"2bad37d73af443a32f500164ad1be9881679e67209c250ef89d6d336fb3e9352"} Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.835477 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1c6d9654dc00fc3c31b3306955bdd12e02807f02d96f615a08e7d5a43876f588"} Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.836191 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.836233 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.836920 5117 
status_manager.go:895] "Failed to get status for pod" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5117]: E0130 00:14:05.837248 5117 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:05 crc kubenswrapper[5117]: I0130 00:14:05.837924 5117 status_manager.go:895] "Failed to get status for pod" podUID="6e098783-f06a-467c-817d-27e420e206b0" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-84dbb4d7c9-7g59k\": dial tcp 38.102.83.222:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.845369 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c1b3be0a1e124bee9b88a18a9ed9eda71de8b5d2a446038bd74d2a4875ad189b"} Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.845767 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"582364c561abed4741759809bddf1a7593db716f7dce50b9d45b2108d6c78839"} Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.845782 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d30755c2483f10a105a7b909fc43326bcdd7b7eb376a0fd424a03b2d8c82259a"} Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.847988 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.848037 5117 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6" exitCode=1 Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.848132 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6"} Jan 30 00:14:06 crc kubenswrapper[5117]: I0130 00:14:06.848665 5117 scope.go:117] "RemoveContainer" containerID="27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6" Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.776228 5117 patch_prober.go:28] interesting pod/controller-manager-84dbb4d7c9-7g59k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" start-of-body= Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.776310 5117 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podUID="6e098783-f06a-467c-817d-27e420e206b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.855492 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a4c9473ca42d3155b849deae90db23a3c357b6a807199ef67d566bf85e0a18ac"} Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.855536 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b4abdfb27ca75ce074ff999ff42aae22783761c8b647c828fff26087ed58df0c"} Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.855650 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.855726 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.855746 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.858611 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:14:07 crc kubenswrapper[5117]: I0130 00:14:07.858728 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"bb292d26bbc84dfb6323c4a549d58f69c87ea9a1e8a2ab5f0e00d09412f628d0"} Jan 30 00:14:10 crc kubenswrapper[5117]: I0130 00:14:10.057679 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:10 crc kubenswrapper[5117]: I0130 00:14:10.057787 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:10 crc kubenswrapper[5117]: I0130 00:14:10.068182 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:11 crc kubenswrapper[5117]: I0130 00:14:11.801063 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.871386 5117 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.871427 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.892348 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"532769ff-9767-48cd-8c80-07c96da318f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:14:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:14:05Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d30755c2483f10a105a7b909fc43326bcdd7b7eb376a0fd424a03b2d8c82259a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:14:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c1b3be0a1e124bee9b88a18a9ed9eda71de8b5d2a446038bd74d2a4875ad189b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:14:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://582364c561abed4741759809bddf1a7593db716f7dce50b9d45b2108d6c78839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310
d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:14:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a4c9473ca42d3155b849deae90db23a3c357b6a807199ef67d566bf85e0a18ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4abdfb27ca75ce074ff999ff42aae22783761c8b647c828fff26087ed58df0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:14:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2bad37d73af443a32f500164ad1be9881679e67209c250ef89d6d336fb3e9352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\
"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bad37d73af443a32f500164ad1be9881679e67209c250ef89d6d336fb3e9352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"532769ff-9767-48cd-8c80-07c96da318f9\": field is immutable" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.895207 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="708d8d12-5cff-436c-b503-c0b9c2b1d88c" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.929995 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.930061 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.932654 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="708d8d12-5cff-436c-b503-c0b9c2b1d88c" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.935244 5117 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://d30755c2483f10a105a7b909fc43326bcdd7b7eb376a0fd424a03b2d8c82259a" Jan 30 00:14:12 crc kubenswrapper[5117]: I0130 00:14:12.935276 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:13 crc kubenswrapper[5117]: I0130 00:14:13.934026 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:13 crc kubenswrapper[5117]: I0130 00:14:13.934609 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="532769ff-9767-48cd-8c80-07c96da318f9" Jan 30 00:14:13 crc kubenswrapper[5117]: I0130 00:14:13.937535 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="708d8d12-5cff-436c-b503-c0b9c2b1d88c" Jan 30 00:14:14 crc kubenswrapper[5117]: I0130 00:14:14.300530 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:14 crc kubenswrapper[5117]: I0130 00:14:14.300877 5117 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:14 
crc kubenswrapper[5117]: I0130 00:14:14.300984 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5117]: I0130 00:14:17.774890 5117 patch_prober.go:28] interesting pod/controller-manager-84dbb4d7c9-7g59k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" start-of-body= Jan 30 00:14:17 crc kubenswrapper[5117]: I0130 00:14:17.775283 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podUID="6e098783-f06a-467c-817d-27e420e206b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" Jan 30 00:14:20 crc kubenswrapper[5117]: I0130 00:14:20.993140 5117 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:21 crc kubenswrapper[5117]: I0130 00:14:21.578971 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:14:21 crc kubenswrapper[5117]: I0130 00:14:21.846059 5117 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:22 crc kubenswrapper[5117]: I0130 00:14:22.220872 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:14:23 crc kubenswrapper[5117]: I0130 00:14:23.384551 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:23 crc kubenswrapper[5117]: I0130 00:14:23.751680 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:23 crc kubenswrapper[5117]: I0130 00:14:23.873403 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:14:23 crc kubenswrapper[5117]: I0130 00:14:23.989338 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.150130 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.301082 5117 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.301167 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.318016 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.347363 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.427071 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.568734 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.695740 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:24 crc kubenswrapper[5117]: I0130 00:14:24.905751 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.013736 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.193266 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.417365 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.814292 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.963778 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.978656 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:14:25 crc kubenswrapper[5117]: I0130 00:14:25.998250 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.014226 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.014933 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/0.log" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.014977 5117 generic.go:358] "Generic (PLEG): container finished" podID="6e098783-f06a-467c-817d-27e420e206b0" containerID="bf7f6a359214dce01efea690e591c7c09e3e83e64817a55734a72626d770ff84" 
exitCode=255 Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.015123 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" event={"ID":"6e098783-f06a-467c-817d-27e420e206b0","Type":"ContainerDied","Data":"bf7f6a359214dce01efea690e591c7c09e3e83e64817a55734a72626d770ff84"} Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.015226 5117 scope.go:117] "RemoveContainer" containerID="003621dfdfc321d789ecc68cc10f37563c8e9e48c2c61180eabd69ac055fc784" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.016518 5117 scope.go:117] "RemoveContainer" containerID="bf7f6a359214dce01efea690e591c7c09e3e83e64817a55734a72626d770ff84" Jan 30 00:14:26 crc kubenswrapper[5117]: E0130 00:14:26.017335 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-84dbb4d7c9-7g59k_openshift-controller-manager(6e098783-f06a-467c-817d-27e420e206b0)\"" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podUID="6e098783-f06a-467c-817d-27e420e206b0" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.404820 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.414470 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.501143 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.741509 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.808940 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:14:26 crc kubenswrapper[5117]: I0130 00:14:26.845706 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.025110 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.080023 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.241546 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.265449 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.279264 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 
00:14:27.390176 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.441639 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.582195 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.583501 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.617870 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.652333 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.712246 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.853797 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:27 crc kubenswrapper[5117]: I0130 00:14:27.934966 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.078281 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.186772 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.195960 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.206646 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.287588 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.333187 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.435486 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.534874 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.629563 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 
00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.631630 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.699004 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.702657 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.742215 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.862514 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:14:28 crc kubenswrapper[5117]: I0130 00:14:28.957519 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.178979 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.189410 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.239931 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.422884 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.433631 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.435256 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.448773 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.500294 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.556626 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.563354 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.607983 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.641403 
5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.727810 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.895512 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.904417 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:14:29 crc kubenswrapper[5117]: I0130 00:14:29.972508 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.031996 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.045179 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.064391 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.096082 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.130132 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.180273 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.276352 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.302096 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.391087 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.391878 5117 scope.go:117] "RemoveContainer" containerID="bf7f6a359214dce01efea690e591c7c09e3e83e64817a55734a72626d770ff84" Jan 30 00:14:30 crc kubenswrapper[5117]: E0130 00:14:30.393600 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-84dbb4d7c9-7g59k_openshift-controller-manager(6e098783-f06a-467c-817d-27e420e206b0)\"" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" podUID="6e098783-f06a-467c-817d-27e420e206b0" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.465172 5117 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.530808 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.590532 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.709626 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.789216 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.853767 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.855908 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.901752 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.918107 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:14:30 crc kubenswrapper[5117]: I0130 00:14:30.980262 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.023005 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.233859 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.237108 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.240609 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.320642 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.357043 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.462243 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.507510 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 
00:14:31.552966 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.556309 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.599638 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.605826 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.794453 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.813812 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.922634 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.942747 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.974073 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.987617 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:14:31 crc kubenswrapper[5117]: I0130 00:14:31.997992 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.053304 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.079260 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.155534 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.281114 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.282413 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.325010 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.415940 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.436625 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.493225 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.669935 5117 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.675421 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.675482 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.682788 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.696503 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.701318 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.703035 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.703011129 podStartE2EDuration="20.703011129s" podCreationTimestamp="2026-01-30 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:32.698396799 +0000 UTC m=+235.809932689" watchObservedRunningTime="2026-01-30 00:14:32.703011129 +0000 UTC m=+235.814547039" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.725552 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.849134 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.860742 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.864318 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.865444 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.897189 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.897929 5117 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.932248 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.932413 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:32 crc kubenswrapper[5117]: I0130 00:14:32.984296 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.266262 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.318110 5117 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.335163 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.388243 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.402460 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.406027 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.501421 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.511574 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.571158 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.571173 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.613912 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.654243 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.688483 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:14:33 crc kubenswrapper[5117]: I0130 00:14:33.974947 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 
00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.001234 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.042414 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.051311 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.128984 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.133414 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.185481 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.198505 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.218137 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.300369 5117 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.300524 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.300604 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.301723 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"bb292d26bbc84dfb6323c4a549d58f69c87ea9a1e8a2ab5f0e00d09412f628d0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.301925 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://bb292d26bbc84dfb6323c4a549d58f69c87ea9a1e8a2ab5f0e00d09412f628d0" gracePeriod=30 Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.334172 5117 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.338141 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.416597 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.491154 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.555487 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.555573 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.564661 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.726717 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.784613 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.790275 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.932649 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.946573 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:14:34 crc kubenswrapper[5117]: I0130 00:14:34.964927 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.006534 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.060761 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.121495 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 
00:14:35.202329 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.245408 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.269865 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.306743 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.374522 5117 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.374919 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d" gracePeriod=5 Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.381542 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.410953 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.509252 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.514483 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.624342 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.651839 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.709111 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.716123 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.730926 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.748209 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.754858 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.842649 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:14:35 crc kubenswrapper[5117]: I0130 00:14:35.995457 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.015584 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.035963 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.198917 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.231309 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.254248 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.289199 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.365108 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.365851 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.453326 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.512744 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.601505 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.616617 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.725770 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.824752 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:14:36 crc kubenswrapper[5117]: I0130 00:14:36.863238 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:14:37 
crc kubenswrapper[5117]: I0130 00:14:37.005841 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.007081 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.081346 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.121832 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.147523 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.148181 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.197434 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.303913 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.330247 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.332309 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.394291 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.495727 5117 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.547751 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.554217 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.573641 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.651521 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.705982 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.762113 5117 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.825024 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.849798 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.879878 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.883870 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.958585 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:14:37 crc kubenswrapper[5117]: I0130 00:14:37.988772 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.162506 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.168056 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.200428 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.231144 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.305731 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.341227 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.386678 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.442897 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.565593 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.652087 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.693374 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 
00:14:38.710896 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.712226 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.840090 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.855105 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:14:38 crc kubenswrapper[5117]: I0130 00:14:38.907751 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.009842 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.028405 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.167499 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.228228 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.313369 5117 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.387224 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.464042 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.598890 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.730947 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.794803 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:14:39 crc kubenswrapper[5117]: I0130 00:14:39.883272 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:14:40 crc kubenswrapper[5117]: I0130 00:14:40.009742 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:14:40 crc kubenswrapper[5117]: I0130 00:14:40.580927 5117 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:14:40 crc kubenswrapper[5117]: I0130 00:14:40.846843 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:14:40 crc kubenswrapper[5117]: I0130 00:14:40.976970 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:40 crc kubenswrapper[5117]: I0130 00:14:40.977056 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:40 crc kubenswrapper[5117]: I0130 00:14:40.978780 5117 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057384 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057505 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057535 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057562 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057621 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057641 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057671 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057747 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057759 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057945 5117 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057959 5117 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057967 5117 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.057975 5117 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.066464 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.120347 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.120409 5117 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d" exitCode=137 Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.120546 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.120602 5117 scope.go:117] "RemoveContainer" containerID="5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.121941 5117 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.136780 5117 scope.go:117] "RemoveContainer" containerID="5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d" Jan 30 00:14:41 crc kubenswrapper[5117]: E0130 00:14:41.137310 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d\": container with ID starting with 5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d not found: ID does not exist" containerID="5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.137412 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d"} err="failed to get container status \"5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d\": rpc error: code = NotFound desc = could not find container \"5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d\": container with ID starting with 5a1e229547257367537ef511b25c571dc9a930eba961c635408b6f43ee3cca1d not found: ID does not exist" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.138436 5117 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.159106 5117 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:41 crc kubenswrapper[5117]: I0130 00:14:41.369367 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:14:42 crc kubenswrapper[5117]: I0130 00:14:42.038172 5117 scope.go:117] "RemoveContainer" containerID="bf7f6a359214dce01efea690e591c7c09e3e83e64817a55734a72626d770ff84" Jan 30 00:14:43 crc kubenswrapper[5117]: I0130 00:14:43.048955 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 30 00:14:43 crc kubenswrapper[5117]: I0130 00:14:43.136130 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:14:43 crc kubenswrapper[5117]: I0130 00:14:43.136313 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" event={"ID":"6e098783-f06a-467c-817d-27e420e206b0","Type":"ContainerStarted","Data":"1a4cfdc785d682f844ad8d489d7c5c527953fd6c68ec66f125a8db9291ff47d1"} Jan 30 00:14:43 crc kubenswrapper[5117]: I0130 00:14:43.137633 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:14:43 crc kubenswrapper[5117]: I0130 00:14:43.150674 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84dbb4d7c9-7g59k" Jan 30 00:14:43 crc kubenswrapper[5117]: I0130 00:14:43.738638 5117 ???:1] "http: TLS handshake error from 192.168.126.11:47826: no serving certificate available for the kubelet" Jan 30 00:15:02 crc kubenswrapper[5117]: I0130 00:15:02.270831 5117 generic.go:358] "Generic (PLEG): container finished" podID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerID="50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27" exitCode=0 Jan 30 00:15:02 crc kubenswrapper[5117]: I0130 00:15:02.270967 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" event={"ID":"92f91bd9-b566-4246-9ac7-9a591ec358b9","Type":"ContainerDied","Data":"50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27"} Jan 30 00:15:02 crc kubenswrapper[5117]: I0130 00:15:02.271864 5117 scope.go:117] "RemoveContainer" containerID="50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27" Jan 30 00:15:02 crc kubenswrapper[5117]: I0130 00:15:02.562898 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:15:03 crc kubenswrapper[5117]: I0130 00:15:03.282967 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" event={"ID":"92f91bd9-b566-4246-9ac7-9a591ec358b9","Type":"ContainerStarted","Data":"de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8"} Jan 30 00:15:03 crc kubenswrapper[5117]: I0130 00:15:03.283053 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:15:03 crc kubenswrapper[5117]: I0130 00:15:03.285752 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:15:04 crc kubenswrapper[5117]: I0130 00:15:04.555629 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:15:04 crc kubenswrapper[5117]: I0130 00:15:04.556036 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 30 00:15:04 crc kubenswrapper[5117]: I0130 00:15:04.556088 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:15:04 crc kubenswrapper[5117]: I0130 00:15:04.556810 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b"} pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:15:04 crc kubenswrapper[5117]: I0130 00:15:04.556887 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" containerID="cri-o://3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b" gracePeriod=600 Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.295576 5117 generic.go:358] "Generic (PLEG): container finished" podID="3965caad-c581-45b3-88e0-99b4039659c5" containerID="3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b" exitCode=0 Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.295713 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerDied","Data":"3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b"} Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.296223 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"a05881f5d76b5732730f0a57f59c72e0cd420789c5088e30351393724d83be5f"} Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.298764 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.300124 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.300158 5117 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="bb292d26bbc84dfb6323c4a549d58f69c87ea9a1e8a2ab5f0e00d09412f628d0" exitCode=137 Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.300270 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"bb292d26bbc84dfb6323c4a549d58f69c87ea9a1e8a2ab5f0e00d09412f628d0"} Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.300345 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1e521a2adf3f4e0298bf01ac1ae215d5dc0b3c8eaf91c659a2bb13e1ea1445e0"} Jan 30 00:15:05 crc kubenswrapper[5117]: I0130 00:15:05.300377 5117 scope.go:117] "RemoveContainer" 
containerID="27fe0b57824a2fe686c02f980ae322bc4e326c0d6f873163f16672108c2eaec6" Jan 30 00:15:06 crc kubenswrapper[5117]: I0130 00:15:06.307831 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:11 crc kubenswrapper[5117]: I0130 00:15:11.800401 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:11 crc kubenswrapper[5117]: I0130 00:15:11.907037 5117 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:15:14 crc kubenswrapper[5117]: I0130 00:15:14.299994 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:14 crc kubenswrapper[5117]: I0130 00:15:14.305462 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:14 crc kubenswrapper[5117]: I0130 00:15:14.361754 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.154571 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8"] Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.155610 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" containerName="installer" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.155624 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" containerName="installer" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.155645 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.155650 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.155741 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.155753 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="4847705e-44a0-41dc-85cf-ac809578afe8" containerName="installer" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.208365 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8"] Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.208592 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.212759 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.213317 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.268967 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a77e6ae5-9be9-428a-a096-febdd31d4ee5-config-volume\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.269018 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57p8z\" (UniqueName: \"kubernetes.io/projected/a77e6ae5-9be9-428a-a096-febdd31d4ee5-kube-api-access-57p8z\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.269118 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a77e6ae5-9be9-428a-a096-febdd31d4ee5-secret-volume\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.370295 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a77e6ae5-9be9-428a-a096-febdd31d4ee5-config-volume\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.370357 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57p8z\" (UniqueName: \"kubernetes.io/projected/a77e6ae5-9be9-428a-a096-febdd31d4ee5-kube-api-access-57p8z\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.370390 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a77e6ae5-9be9-428a-a096-febdd31d4ee5-secret-volume\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.371816 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a77e6ae5-9be9-428a-a096-febdd31d4ee5-config-volume\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 
30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.377297 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a77e6ae5-9be9-428a-a096-febdd31d4ee5-secret-volume\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.386324 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57p8z\" (UniqueName: \"kubernetes.io/projected/a77e6ae5-9be9-428a-a096-febdd31d4ee5-kube-api-access-57p8z\") pod \"collect-profiles-29495535-hhmv8\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.532797 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:21 crc kubenswrapper[5117]: I0130 00:15:21.938563 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8"] Jan 30 00:15:21 crc kubenswrapper[5117]: W0130 00:15:21.943458 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda77e6ae5_9be9_428a_a096_febdd31d4ee5.slice/crio-e3c8a0af4a6f891c4c1b3adc7a2398f9fdb513c6bb5647b394456d790cb58a73 WatchSource:0}: Error finding container e3c8a0af4a6f891c4c1b3adc7a2398f9fdb513c6bb5647b394456d790cb58a73: Status 404 returned error can't find the container with id e3c8a0af4a6f891c4c1b3adc7a2398f9fdb513c6bb5647b394456d790cb58a73 Jan 30 00:15:22 crc kubenswrapper[5117]: I0130 00:15:22.399949 5117 generic.go:358] "Generic (PLEG): container finished" podID="a77e6ae5-9be9-428a-a096-febdd31d4ee5" containerID="22a230ed078558ab6c49ac39bba9d1740f68f5dc79c6c80c0b47506d01601da2" exitCode=0 Jan 30 00:15:22 crc kubenswrapper[5117]: I0130 00:15:22.400050 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" event={"ID":"a77e6ae5-9be9-428a-a096-febdd31d4ee5","Type":"ContainerDied","Data":"22a230ed078558ab6c49ac39bba9d1740f68f5dc79c6c80c0b47506d01601da2"} Jan 30 00:15:22 crc kubenswrapper[5117]: I0130 00:15:22.400107 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" event={"ID":"a77e6ae5-9be9-428a-a096-febdd31d4ee5","Type":"ContainerStarted","Data":"e3c8a0af4a6f891c4c1b3adc7a2398f9fdb513c6bb5647b394456d790cb58a73"} Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.616830 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.699331 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a77e6ae5-9be9-428a-a096-febdd31d4ee5-secret-volume\") pod \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.699415 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a77e6ae5-9be9-428a-a096-febdd31d4ee5-config-volume\") pod \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.699480 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57p8z\" (UniqueName: \"kubernetes.io/projected/a77e6ae5-9be9-428a-a096-febdd31d4ee5-kube-api-access-57p8z\") pod \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\" (UID: \"a77e6ae5-9be9-428a-a096-febdd31d4ee5\") " Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.700222 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77e6ae5-9be9-428a-a096-febdd31d4ee5-config-volume" (OuterVolumeSpecName: "config-volume") pod "a77e6ae5-9be9-428a-a096-febdd31d4ee5" (UID: "a77e6ae5-9be9-428a-a096-febdd31d4ee5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.705113 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77e6ae5-9be9-428a-a096-febdd31d4ee5-kube-api-access-57p8z" (OuterVolumeSpecName: "kube-api-access-57p8z") pod "a77e6ae5-9be9-428a-a096-febdd31d4ee5" (UID: "a77e6ae5-9be9-428a-a096-febdd31d4ee5"). InnerVolumeSpecName "kube-api-access-57p8z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.707285 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77e6ae5-9be9-428a-a096-febdd31d4ee5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a77e6ae5-9be9-428a-a096-febdd31d4ee5" (UID: "a77e6ae5-9be9-428a-a096-febdd31d4ee5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.800514 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a77e6ae5-9be9-428a-a096-febdd31d4ee5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.800549 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a77e6ae5-9be9-428a-a096-febdd31d4ee5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:23 crc kubenswrapper[5117]: I0130 00:15:23.800560 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57p8z\" (UniqueName: \"kubernetes.io/projected/a77e6ae5-9be9-428a-a096-febdd31d4ee5-kube-api-access-57p8z\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:24 crc kubenswrapper[5117]: I0130 00:15:24.411814 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" Jan 30 00:15:24 crc kubenswrapper[5117]: I0130 00:15:24.411905 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-hhmv8" event={"ID":"a77e6ae5-9be9-428a-a096-febdd31d4ee5","Type":"ContainerDied","Data":"e3c8a0af4a6f891c4c1b3adc7a2398f9fdb513c6bb5647b394456d790cb58a73"} Jan 30 00:15:24 crc kubenswrapper[5117]: I0130 00:15:24.411945 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c8a0af4a6f891c4c1b3adc7a2398f9fdb513c6bb5647b394456d790cb58a73" Jan 30 00:15:39 crc kubenswrapper[5117]: I0130 00:15:39.140883 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:15:39 crc kubenswrapper[5117]: I0130 00:15:39.145304 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:15:39 crc kubenswrapper[5117]: I0130 00:15:39.214388 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:39 crc kubenswrapper[5117]: I0130 00:15:39.215192 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:46 crc kubenswrapper[5117]: I0130 00:15:46.710369 5117 ???:1] "http: TLS handshake error from 192.168.126.11:45638: no serving certificate available for the kubelet" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.095964 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26tjl"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.096919 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-26tjl" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="registry-server" containerID="cri-o://b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828" gracePeriod=30 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.136003 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfcw7"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.137057 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nfcw7" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="registry-server" containerID="cri-o://0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246" gracePeriod=30 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.151358 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-f65lp"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.152229 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" containerID="cri-o://de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8" gracePeriod=30 Jan 30 00:16:10 crc 
kubenswrapper[5117]: I0130 00:16:10.155947 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2hcj"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.156800 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x2hcj" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="registry-server" containerID="cri-o://b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905" gracePeriod=30 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.162651 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p98f5"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.162959 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p98f5" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="registry-server" containerID="cri-o://0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98" gracePeriod=30 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.168649 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rzwxb"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.169534 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a77e6ae5-9be9-428a-a096-febdd31d4ee5" containerName="collect-profiles" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.169551 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77e6ae5-9be9-428a-a096-febdd31d4ee5" containerName="collect-profiles" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.169711 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="a77e6ae5-9be9-428a-a096-febdd31d4ee5" containerName="collect-profiles" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.174939 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rzwxb"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.175157 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.273769 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59b8040b-1d85-49e5-8969-3d1fe83b360e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.274151 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvzkc\" (UniqueName: \"kubernetes.io/projected/59b8040b-1d85-49e5-8969-3d1fe83b360e-kube-api-access-hvzkc\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.274191 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59b8040b-1d85-49e5-8969-3d1fe83b360e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.274253 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59b8040b-1d85-49e5-8969-3d1fe83b360e-tmp\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.381394 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59b8040b-1d85-49e5-8969-3d1fe83b360e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.381439 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hvzkc\" (UniqueName: \"kubernetes.io/projected/59b8040b-1d85-49e5-8969-3d1fe83b360e-kube-api-access-hvzkc\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.381463 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59b8040b-1d85-49e5-8969-3d1fe83b360e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.381504 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59b8040b-1d85-49e5-8969-3d1fe83b360e-tmp\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.382138 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59b8040b-1d85-49e5-8969-3d1fe83b360e-tmp\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.383515 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59b8040b-1d85-49e5-8969-3d1fe83b360e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.389598 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59b8040b-1d85-49e5-8969-3d1fe83b360e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.401910 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvzkc\" (UniqueName: \"kubernetes.io/projected/59b8040b-1d85-49e5-8969-3d1fe83b360e-kube-api-access-hvzkc\") pod \"marketplace-operator-547dbd544d-rzwxb\" (UID: \"59b8040b-1d85-49e5-8969-3d1fe83b360e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.514834 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.535556 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.537072 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.595482 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.600918 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.634035 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686204 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-catalog-content\") pod \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686260 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2xrg\" (UniqueName: \"kubernetes.io/projected/96d26479-7c9f-4877-afc4-338863fcdf4d-kube-api-access-h2xrg\") pod \"96d26479-7c9f-4877-afc4-338863fcdf4d\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686504 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-utilities\") pod \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686575 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97ptf\" (UniqueName: \"kubernetes.io/projected/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-kube-api-access-97ptf\") pod \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\" (UID: \"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686736 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-catalog-content\") pod \"96d26479-7c9f-4877-afc4-338863fcdf4d\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686803 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45k5n\" (UniqueName: \"kubernetes.io/projected/fe73bcd6-db8f-4472-a65f-b7858304bc8b-kube-api-access-45k5n\") pod \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686842 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bvfp\" (UniqueName: \"kubernetes.io/projected/92f91bd9-b566-4246-9ac7-9a591ec358b9-kube-api-access-7bvfp\") pod \"92f91bd9-b566-4246-9ac7-9a591ec358b9\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686870 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-trusted-ca\") pod \"92f91bd9-b566-4246-9ac7-9a591ec358b9\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.686947 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-utilities\") pod \"96d26479-7c9f-4877-afc4-338863fcdf4d\" (UID: \"96d26479-7c9f-4877-afc4-338863fcdf4d\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.687027 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-utilities\") pod \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.687045 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-catalog-content\") pod \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\" (UID: \"fe73bcd6-db8f-4472-a65f-b7858304bc8b\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.687093 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-operator-metrics\") pod \"92f91bd9-b566-4246-9ac7-9a591ec358b9\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.687115 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92f91bd9-b566-4246-9ac7-9a591ec358b9-tmp\") pod \"92f91bd9-b566-4246-9ac7-9a591ec358b9\" (UID: \"92f91bd9-b566-4246-9ac7-9a591ec358b9\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.687779 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f91bd9-b566-4246-9ac7-9a591ec358b9-tmp" (OuterVolumeSpecName: "tmp") pod "92f91bd9-b566-4246-9ac7-9a591ec358b9" (UID: "92f91bd9-b566-4246-9ac7-9a591ec358b9"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.687795 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "92f91bd9-b566-4246-9ac7-9a591ec358b9" (UID: "92f91bd9-b566-4246-9ac7-9a591ec358b9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.688249 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-utilities" (OuterVolumeSpecName: "utilities") pod "48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" (UID: "48b76cf6-e8bb-4fb2-92bd-4b1718a794f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.688861 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-utilities" (OuterVolumeSpecName: "utilities") pod "96d26479-7c9f-4877-afc4-338863fcdf4d" (UID: "96d26479-7c9f-4877-afc4-338863fcdf4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.689023 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-utilities" (OuterVolumeSpecName: "utilities") pod "fe73bcd6-db8f-4472-a65f-b7858304bc8b" (UID: "fe73bcd6-db8f-4472-a65f-b7858304bc8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.690658 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-kube-api-access-97ptf" (OuterVolumeSpecName: "kube-api-access-97ptf") pod "48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" (UID: "48b76cf6-e8bb-4fb2-92bd-4b1718a794f6"). InnerVolumeSpecName "kube-api-access-97ptf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.690668 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f91bd9-b566-4246-9ac7-9a591ec358b9-kube-api-access-7bvfp" (OuterVolumeSpecName: "kube-api-access-7bvfp") pod "92f91bd9-b566-4246-9ac7-9a591ec358b9" (UID: "92f91bd9-b566-4246-9ac7-9a591ec358b9"). InnerVolumeSpecName "kube-api-access-7bvfp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.691020 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "92f91bd9-b566-4246-9ac7-9a591ec358b9" (UID: "92f91bd9-b566-4246-9ac7-9a591ec358b9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.691379 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe73bcd6-db8f-4472-a65f-b7858304bc8b-kube-api-access-45k5n" (OuterVolumeSpecName: "kube-api-access-45k5n") pod "fe73bcd6-db8f-4472-a65f-b7858304bc8b" (UID: "fe73bcd6-db8f-4472-a65f-b7858304bc8b"). InnerVolumeSpecName "kube-api-access-45k5n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.692284 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d26479-7c9f-4877-afc4-338863fcdf4d-kube-api-access-h2xrg" (OuterVolumeSpecName: "kube-api-access-h2xrg") pod "96d26479-7c9f-4877-afc4-338863fcdf4d" (UID: "96d26479-7c9f-4877-afc4-338863fcdf4d"). InnerVolumeSpecName "kube-api-access-h2xrg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.719508 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe73bcd6-db8f-4472-a65f-b7858304bc8b" (UID: "fe73bcd6-db8f-4472-a65f-b7858304bc8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.720451 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96d26479-7c9f-4877-afc4-338863fcdf4d" (UID: "96d26479-7c9f-4877-afc4-338863fcdf4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.731763 5117 generic.go:358] "Generic (PLEG): container finished" podID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerID="0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98" exitCode=0 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.731892 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p98f5" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.731912 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p98f5" event={"ID":"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6","Type":"ContainerDied","Data":"0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.731960 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p98f5" event={"ID":"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6","Type":"ContainerDied","Data":"a6643befd0ee3808a34fe945c9b3bbcb792b8c4912973ea63e1b6c2978e9785b"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.731987 5117 scope.go:117] "RemoveContainer" containerID="0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.735028 5117 generic.go:358] "Generic (PLEG): container finished" podID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerID="b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905" exitCode=0 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.735159 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2hcj" event={"ID":"96d26479-7c9f-4877-afc4-338863fcdf4d","Type":"ContainerDied","Data":"b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.735186 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2hcj" event={"ID":"96d26479-7c9f-4877-afc4-338863fcdf4d","Type":"ContainerDied","Data":"127ee498696d9f6992d9070b9acfaa22fc589904a54689fdc1cf5882e5662952"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.735243 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2hcj" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.737214 5117 generic.go:358] "Generic (PLEG): container finished" podID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerID="de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8" exitCode=0 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.737344 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" event={"ID":"92f91bd9-b566-4246-9ac7-9a591ec358b9","Type":"ContainerDied","Data":"de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.737375 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" event={"ID":"92f91bd9-b566-4246-9ac7-9a591ec358b9","Type":"ContainerDied","Data":"2c6769ef815e932623bd67075ed2fca05942c9c606f1c08be05ea572cd50a9ca"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.737490 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-f65lp" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.740681 5117 generic.go:358] "Generic (PLEG): container finished" podID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerID="b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828" exitCode=0 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.740787 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerDied","Data":"b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.740811 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26tjl" event={"ID":"fe73bcd6-db8f-4472-a65f-b7858304bc8b","Type":"ContainerDied","Data":"178a3c82d89cf257a117a46fb20a2c5929de68bd7ccbeb7ca50804c5d1c81fa5"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.740884 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26tjl" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.749368 5117 scope.go:117] "RemoveContainer" containerID="33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.755477 5117 generic.go:358] "Generic (PLEG): container finished" podID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerID="0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246" exitCode=0 Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.755741 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerDied","Data":"0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.755777 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcw7" event={"ID":"48b76cf6-e8bb-4fb2-92bd-4b1718a794f6","Type":"ContainerDied","Data":"62855ab264c18230ad0baa7bb62ae64cc53c48103b381d3d2afcb8a2dd3efc06"} Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.755971 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfcw7" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.761950 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" (UID: "48b76cf6-e8bb-4fb2-92bd-4b1718a794f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.772395 5117 scope.go:117] "RemoveContainer" containerID="d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.780323 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26tjl"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.786722 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-26tjl"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.787998 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-utilities\") pod \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788050 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggfp\" (UniqueName: \"kubernetes.io/projected/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-kube-api-access-cggfp\") pod \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788334 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-catalog-content\") pod \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\" (UID: \"5c584ba7-3c7e-4eb3-ab6e-49155e956ab6\") " Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788923 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45k5n\" (UniqueName: \"kubernetes.io/projected/fe73bcd6-db8f-4472-a65f-b7858304bc8b-kube-api-access-45k5n\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788945 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bvfp\" (UniqueName: \"kubernetes.io/projected/92f91bd9-b566-4246-9ac7-9a591ec358b9-kube-api-access-7bvfp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788954 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788962 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788971 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788980 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe73bcd6-db8f-4472-a65f-b7858304bc8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788988 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/92f91bd9-b566-4246-9ac7-9a591ec358b9-marketplace-operator-metrics\") on node 
\"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.788998 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92f91bd9-b566-4246-9ac7-9a591ec358b9-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.789007 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.789016 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2xrg\" (UniqueName: \"kubernetes.io/projected/96d26479-7c9f-4877-afc4-338863fcdf4d-kube-api-access-h2xrg\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.789025 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.789033 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-97ptf\" (UniqueName: \"kubernetes.io/projected/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6-kube-api-access-97ptf\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.789041 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d26479-7c9f-4877-afc4-338863fcdf4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.791085 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2hcj"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.791961 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-kube-api-access-cggfp" (OuterVolumeSpecName: "kube-api-access-cggfp") pod "5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" (UID: "5c584ba7-3c7e-4eb3-ab6e-49155e956ab6"). InnerVolumeSpecName "kube-api-access-cggfp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.794322 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-utilities" (OuterVolumeSpecName: "utilities") pod "5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" (UID: "5c584ba7-3c7e-4eb3-ab6e-49155e956ab6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.796305 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2hcj"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.800971 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-f65lp"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.803969 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-f65lp"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.806789 5117 scope.go:117] "RemoveContainer" containerID="0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.807198 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98\": container with ID starting with 0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98 not found: ID does not exist" containerID="0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.807228 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98"} err="failed to get container status \"0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98\": rpc error: code = NotFound desc = could not find container \"0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98\": container with ID starting with 0c00837d71360cbb95ab2b7a04e696f137b62d6dc5d2272f071ce4555a9e7e98 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.807249 5117 scope.go:117] "RemoveContainer" containerID="33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.807629 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f\": container with ID starting with 33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f not found: ID does not exist" containerID="33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.807787 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f"} err="failed to get container status \"33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f\": rpc error: code = NotFound desc = could not find container \"33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f\": container with ID starting with 33d775e18cbf034877bb6efa9fe091af3bbcd78c8a25e3d3f3a2360cbdb7c07f not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.807898 5117 scope.go:117] "RemoveContainer" containerID="d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.808274 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e\": container with ID starting with 
d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e not found: ID does not exist" containerID="d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.808299 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e"} err="failed to get container status \"d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e\": rpc error: code = NotFound desc = could not find container \"d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e\": container with ID starting with d019a18d57252b3c8ba8bf2c7f145262431e10a355039247b2529779cf49324e not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.808313 5117 scope.go:117] "RemoveContainer" containerID="b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.820153 5117 scope.go:117] "RemoveContainer" containerID="9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.833339 5117 scope.go:117] "RemoveContainer" containerID="03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.845805 5117 scope.go:117] "RemoveContainer" containerID="b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.846326 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905\": container with ID starting with b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905 not found: ID does not exist" containerID="b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.846515 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905"} err="failed to get container status \"b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905\": rpc error: code = NotFound desc = could not find container \"b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905\": container with ID starting with b0b8a7036df296507053eacde06cc663aa664151c7b82b4e2954efd90b469905 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.846618 5117 scope.go:117] "RemoveContainer" containerID="9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.847236 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e\": container with ID starting with 9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e not found: ID does not exist" containerID="9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.847347 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e"} err="failed to get container status \"9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e\": rpc error: code = NotFound desc 
= could not find container \"9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e\": container with ID starting with 9534d11d2c7c7570f031f710f8064701997f82f6f87a0d1008643943f320e33e not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.847466 5117 scope.go:117] "RemoveContainer" containerID="03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.847801 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67\": container with ID starting with 03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67 not found: ID does not exist" containerID="03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.847954 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67"} err="failed to get container status \"03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67\": rpc error: code = NotFound desc = could not find container \"03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67\": container with ID starting with 03b59bfcd0fd4f4ee216561e53999da0f6ce6fbd9127eee4bf2ac8ab188b4f67 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.848039 5117 scope.go:117] "RemoveContainer" containerID="de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.867411 5117 scope.go:117] "RemoveContainer" containerID="50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.881317 5117 scope.go:117] "RemoveContainer" containerID="de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.881907 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8\": container with ID starting with de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8 not found: ID does not exist" containerID="de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.881941 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8"} err="failed to get container status \"de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8\": rpc error: code = NotFound desc = could not find container \"de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8\": container with ID starting with de83502a6cb59384f874ead138c03eb6291c353453e459a7725bdcfa021fd4f8 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.882064 5117 scope.go:117] "RemoveContainer" containerID="50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.882306 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27\": container with ID starting with 
50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27 not found: ID does not exist" containerID="50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.882328 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27"} err="failed to get container status \"50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27\": rpc error: code = NotFound desc = could not find container \"50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27\": container with ID starting with 50bef3911878a3d8424291e7e45d6f9efef178d90b8769172af299b762e91d27 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.882345 5117 scope.go:117] "RemoveContainer" containerID="b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.888190 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" (UID: "5c584ba7-3c7e-4eb3-ab6e-49155e956ab6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.890439 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.890483 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cggfp\" (UniqueName: \"kubernetes.io/projected/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-kube-api-access-cggfp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.890503 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.904324 5117 scope.go:117] "RemoveContainer" containerID="eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.923541 5117 scope.go:117] "RemoveContainer" containerID="13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.942214 5117 scope.go:117] "RemoveContainer" containerID="b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.942876 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828\": container with ID starting with b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828 not found: ID does not exist" containerID="b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.942973 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828"} err="failed to get container status \"b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828\": rpc error: 
code = NotFound desc = could not find container \"b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828\": container with ID starting with b3fe7c4e8b8526c5f5589743ac79ec514792cb9d3eef8d03838b1e0381ef4828 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.943016 5117 scope.go:117] "RemoveContainer" containerID="eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.943919 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a\": container with ID starting with eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a not found: ID does not exist" containerID="eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.943962 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a"} err="failed to get container status \"eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a\": rpc error: code = NotFound desc = could not find container \"eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a\": container with ID starting with eaca697c585652c5052b68bd3298ad2637e10c2ffb15908d1a0d88d52b09d51a not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.943991 5117 scope.go:117] "RemoveContainer" containerID="13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476" Jan 30 00:16:10 crc kubenswrapper[5117]: E0130 00:16:10.944508 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476\": container with ID starting with 13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476 not found: ID does not exist" containerID="13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.944551 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476"} err="failed to get container status \"13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476\": rpc error: code = NotFound desc = could not find container \"13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476\": container with ID starting with 13414ffe5514967ce49e72599cf66fd779226cbe2baae9dd68848629d8951476 not found: ID does not exist" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.944576 5117 scope.go:117] "RemoveContainer" containerID="0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.967050 5117 scope.go:117] "RemoveContainer" containerID="e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e" Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.974344 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-rzwxb"] Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.986245 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:16:10 crc kubenswrapper[5117]: I0130 00:16:10.994391 5117 scope.go:117] "RemoveContainer" 
containerID="bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.008826 5117 scope.go:117] "RemoveContainer" containerID="0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246" Jan 30 00:16:11 crc kubenswrapper[5117]: E0130 00:16:11.009222 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246\": container with ID starting with 0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246 not found: ID does not exist" containerID="0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.009263 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246"} err="failed to get container status \"0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246\": rpc error: code = NotFound desc = could not find container \"0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246\": container with ID starting with 0b3cafda761d878396f91fdf22cc086b8318e772a9998dbb186dd85f20595246 not found: ID does not exist" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.009288 5117 scope.go:117] "RemoveContainer" containerID="e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e" Jan 30 00:16:11 crc kubenswrapper[5117]: E0130 00:16:11.009720 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e\": container with ID starting with e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e not found: ID does not exist" containerID="e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.009750 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e"} err="failed to get container status \"e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e\": rpc error: code = NotFound desc = could not find container \"e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e\": container with ID starting with e9cec2d57ebb6a46bc0a865e1e638a2badae1c4863a1171190e68b212c04f45e not found: ID does not exist" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.009774 5117 scope.go:117] "RemoveContainer" containerID="bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57" Jan 30 00:16:11 crc kubenswrapper[5117]: E0130 00:16:11.010110 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57\": container with ID starting with bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57 not found: ID does not exist" containerID="bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.010157 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57"} err="failed to get container status \"bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57\": rpc error: code = 
NotFound desc = could not find container \"bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57\": container with ID starting with bab1566beb1c267e3f0edeb67df9087d98790b8b0d1b4d68134fb7d4665b7b57 not found: ID does not exist" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.047108 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" path="/var/lib/kubelet/pods/92f91bd9-b566-4246-9ac7-9a591ec358b9/volumes" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.047745 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" path="/var/lib/kubelet/pods/96d26479-7c9f-4877-afc4-338863fcdf4d/volumes" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.048357 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" path="/var/lib/kubelet/pods/fe73bcd6-db8f-4472-a65f-b7858304bc8b/volumes" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.060679 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p98f5"] Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.063459 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p98f5"] Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.090771 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfcw7"] Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.095611 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nfcw7"] Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.762604 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" event={"ID":"59b8040b-1d85-49e5-8969-3d1fe83b360e","Type":"ContainerStarted","Data":"175aa1b8ea5daba8fc98f2fcc4786f21eb6f3e43115f97c5a88dfce3ffea3150"} Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.763915 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" event={"ID":"59b8040b-1d85-49e5-8969-3d1fe83b360e","Type":"ContainerStarted","Data":"6c9429668a6b4620f7ab890ef8970ab7696675d9ab09c4879c4124bb6a75e015"} Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.764162 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.774759 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" Jan 30 00:16:11 crc kubenswrapper[5117]: I0130 00:16:11.791194 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-rzwxb" podStartSLOduration=1.791163727 podStartE2EDuration="1.791163727s" podCreationTimestamp="2026-01-30 00:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:11.788759069 +0000 UTC m=+334.900294979" watchObservedRunningTime="2026-01-30 00:16:11.791163727 +0000 UTC m=+334.902699617" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.048350 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" 
path="/var/lib/kubelet/pods/48b76cf6-e8bb-4fb2-92bd-4b1718a794f6/volumes" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.050257 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" path="/var/lib/kubelet/pods/5c584ba7-3c7e-4eb3-ab6e-49155e956ab6/volumes" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096001 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6cq9w"] Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096576 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096600 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096615 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096621 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096629 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096635 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096647 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096656 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096666 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096676 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096814 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096828 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096840 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096847 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096861 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096868 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096880 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096887 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="extract-utilities" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096899 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096907 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096917 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096924 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096933 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096939 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096961 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.096968 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="extract-content" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097055 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097066 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="48b76cf6-e8bb-4fb2-92bd-4b1718a794f6" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097073 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097081 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="96d26479-7c9f-4877-afc4-338863fcdf4d" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097092 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="5c584ba7-3c7e-4eb3-ab6e-49155e956ab6" containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097101 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="fe73bcd6-db8f-4472-a65f-b7858304bc8b" 
containerName="registry-server" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097216 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.097226 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f91bd9-b566-4246-9ac7-9a591ec358b9" containerName="marketplace-operator" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.117958 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cq9w"] Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.118146 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.127813 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.159503 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-catalog-content\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.159730 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-utilities\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.175961 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdzc\" (UniqueName: \"kubernetes.io/projected/d2e964cb-3a46-4bbc-823f-43ad384d844c-kube-api-access-5jdzc\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.276853 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdzc\" (UniqueName: \"kubernetes.io/projected/d2e964cb-3a46-4bbc-823f-43ad384d844c-kube-api-access-5jdzc\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.276919 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-catalog-content\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.276963 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-utilities\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.277366 5117 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-utilities\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.277456 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-catalog-content\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.294972 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7m42x"] Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.303497 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdzc\" (UniqueName: \"kubernetes.io/projected/d2e964cb-3a46-4bbc-823f-43ad384d844c-kube-api-access-5jdzc\") pod \"redhat-marketplace-6cq9w\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.307607 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.308059 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m42x"] Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.310954 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.378104 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbls\" (UniqueName: \"kubernetes.io/projected/00c7c764-9b8c-4146-a659-38621c5e3c35-kube-api-access-xlbls\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.378202 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c7c764-9b8c-4146-a659-38621c5e3c35-utilities\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.378270 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c7c764-9b8c-4146-a659-38621c5e3c35-catalog-content\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.450933 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.481962 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c7c764-9b8c-4146-a659-38621c5e3c35-utilities\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.482034 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c7c764-9b8c-4146-a659-38621c5e3c35-catalog-content\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.482102 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbls\" (UniqueName: \"kubernetes.io/projected/00c7c764-9b8c-4146-a659-38621c5e3c35-kube-api-access-xlbls\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.482958 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c7c764-9b8c-4146-a659-38621c5e3c35-utilities\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.483038 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c7c764-9b8c-4146-a659-38621c5e3c35-catalog-content\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.499610 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlbls\" (UniqueName: \"kubernetes.io/projected/00c7c764-9b8c-4146-a659-38621c5e3c35-kube-api-access-xlbls\") pod \"redhat-operators-7m42x\" (UID: \"00c7c764-9b8c-4146-a659-38621c5e3c35\") " pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.638119 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7m42x" Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.694419 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cq9w"] Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.786168 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cq9w" event={"ID":"d2e964cb-3a46-4bbc-823f-43ad384d844c","Type":"ContainerStarted","Data":"caf4520087fe10f071c949a918fcc22a0c8bfa8a79259157f120bbef4db12b5b"} Jan 30 00:16:13 crc kubenswrapper[5117]: I0130 00:16:13.829792 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m42x"] Jan 30 00:16:13 crc kubenswrapper[5117]: W0130 00:16:13.837674 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c7c764_9b8c_4146_a659_38621c5e3c35.slice/crio-157d0d6908fe6749601492dc5e6e5206d1990037791a2881d739ab0977db3952 WatchSource:0}: Error finding container 157d0d6908fe6749601492dc5e6e5206d1990037791a2881d739ab0977db3952: Status 404 returned error can't find the container with id 157d0d6908fe6749601492dc5e6e5206d1990037791a2881d739ab0977db3952 Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.407229 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-q4n4v"] Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.411394 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.426984 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-q4n4v"] Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.495650 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.495718 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f18de92-5f10-418e-88b9-54673141e567-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.495739 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f18de92-5f10-418e-88b9-54673141e567-trusted-ca\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.495772 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-registry-tls\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.495798 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmpd\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-kube-api-access-lnmpd\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.495932 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f18de92-5f10-418e-88b9-54673141e567-registry-certificates\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.496056 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-bound-sa-token\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.496241 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f18de92-5f10-418e-88b9-54673141e567-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.525848 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.597834 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-registry-tls\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.597914 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnmpd\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-kube-api-access-lnmpd\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.598328 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f18de92-5f10-418e-88b9-54673141e567-registry-certificates\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: 
I0130 00:16:14.598368 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-bound-sa-token\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.598396 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f18de92-5f10-418e-88b9-54673141e567-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.598438 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f18de92-5f10-418e-88b9-54673141e567-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.598456 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f18de92-5f10-418e-88b9-54673141e567-trusted-ca\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.599710 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f18de92-5f10-418e-88b9-54673141e567-trusted-ca\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.600321 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f18de92-5f10-418e-88b9-54673141e567-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.600683 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f18de92-5f10-418e-88b9-54673141e567-registry-certificates\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.606927 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f18de92-5f10-418e-88b9-54673141e567-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.608510 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-registry-tls\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: 
\"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.623979 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-bound-sa-token\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.628013 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnmpd\" (UniqueName: \"kubernetes.io/projected/9f18de92-5f10-418e-88b9-54673141e567-kube-api-access-lnmpd\") pod \"image-registry-5d9d95bf5b-q4n4v\" (UID: \"9f18de92-5f10-418e-88b9-54673141e567\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.726820 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.795255 5117 generic.go:358] "Generic (PLEG): container finished" podID="00c7c764-9b8c-4146-a659-38621c5e3c35" containerID="e0f47cb1fe25ce0ff586107f062bb43ba030209e5c4a98e7e6ffba65ba9200e2" exitCode=0 Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.795404 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m42x" event={"ID":"00c7c764-9b8c-4146-a659-38621c5e3c35","Type":"ContainerDied","Data":"e0f47cb1fe25ce0ff586107f062bb43ba030209e5c4a98e7e6ffba65ba9200e2"} Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.795443 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m42x" event={"ID":"00c7c764-9b8c-4146-a659-38621c5e3c35","Type":"ContainerStarted","Data":"157d0d6908fe6749601492dc5e6e5206d1990037791a2881d739ab0977db3952"} Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.801626 5117 generic.go:358] "Generic (PLEG): container finished" podID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerID="a68285855e082e326befd33b629a79839b8072fd4f675a8806fd748b417ad06e" exitCode=0 Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.801752 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cq9w" event={"ID":"d2e964cb-3a46-4bbc-823f-43ad384d844c","Type":"ContainerDied","Data":"a68285855e082e326befd33b629a79839b8072fd4f675a8806fd748b417ad06e"} Jan 30 00:16:14 crc kubenswrapper[5117]: I0130 00:16:14.920956 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-q4n4v"] Jan 30 00:16:14 crc kubenswrapper[5117]: W0130 00:16:14.928837 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f18de92_5f10_418e_88b9_54673141e567.slice/crio-fc1d25bef08dfc4fc3e73f254ba099ff92a364b92c09878b6463515d65e73f56 WatchSource:0}: Error finding container fc1d25bef08dfc4fc3e73f254ba099ff92a364b92c09878b6463515d65e73f56: Status 404 returned error can't find the container with id fc1d25bef08dfc4fc3e73f254ba099ff92a364b92c09878b6463515d65e73f56 Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.697521 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xfchs"] Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.718017 
5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xfchs"] Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.718202 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.755891 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.808974 5117 generic.go:358] "Generic (PLEG): container finished" podID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerID="bd60b939c84782959ee32d09c79ed496c3aa1886125d8e8a48b88fb357df0855" exitCode=0 Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.809028 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cq9w" event={"ID":"d2e964cb-3a46-4bbc-823f-43ad384d844c","Type":"ContainerDied","Data":"bd60b939c84782959ee32d09c79ed496c3aa1886125d8e8a48b88fb357df0855"} Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.812881 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" event={"ID":"9f18de92-5f10-418e-88b9-54673141e567","Type":"ContainerStarted","Data":"f0c965ad8e3a5346d872454df0843b67bc5c7a806878e38e59de17753c99a9c1"} Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.812932 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" event={"ID":"9f18de92-5f10-418e-88b9-54673141e567","Type":"ContainerStarted","Data":"fc1d25bef08dfc4fc3e73f254ba099ff92a364b92c09878b6463515d65e73f56"} Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.813038 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.814642 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m42x" event={"ID":"00c7c764-9b8c-4146-a659-38621c5e3c35","Type":"ContainerStarted","Data":"16802016abcbac8d0e3a3bd9d6843d258aab5559b4859f2909dec5e12a82827d"} Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.816826 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da906ef-6bfe-4595-b492-fc192b73118e-catalog-content\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.816890 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da906ef-6bfe-4595-b492-fc192b73118e-utilities\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.816939 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f755d\" (UniqueName: \"kubernetes.io/projected/4da906ef-6bfe-4595-b492-fc192b73118e-kube-api-access-f755d\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc 
kubenswrapper[5117]: I0130 00:16:15.839880 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v" podStartSLOduration=1.839857904 podStartE2EDuration="1.839857904s" podCreationTimestamp="2026-01-30 00:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:15.837916039 +0000 UTC m=+338.949451929" watchObservedRunningTime="2026-01-30 00:16:15.839857904 +0000 UTC m=+338.951393814" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.897740 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6d7bs"] Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.906508 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.909772 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.912899 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d7bs"] Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.918234 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da906ef-6bfe-4595-b492-fc192b73118e-utilities\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.918377 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f755d\" (UniqueName: \"kubernetes.io/projected/4da906ef-6bfe-4595-b492-fc192b73118e-kube-api-access-f755d\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.918427 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da906ef-6bfe-4595-b492-fc192b73118e-catalog-content\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.918881 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da906ef-6bfe-4595-b492-fc192b73118e-catalog-content\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.919607 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da906ef-6bfe-4595-b492-fc192b73118e-utilities\") pod \"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:15 crc kubenswrapper[5117]: I0130 00:16:15.938877 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f755d\" (UniqueName: \"kubernetes.io/projected/4da906ef-6bfe-4595-b492-fc192b73118e-kube-api-access-f755d\") pod 
\"certified-operators-xfchs\" (UID: \"4da906ef-6bfe-4595-b492-fc192b73118e\") " pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.019554 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841f2982-7c20-4202-a7ca-633883c148b2-utilities\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.019617 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841f2982-7c20-4202-a7ca-633883c148b2-catalog-content\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.019666 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2br\" (UniqueName: \"kubernetes.io/projected/841f2982-7c20-4202-a7ca-633883c148b2-kube-api-access-sh2br\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.068673 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xfchs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.121304 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841f2982-7c20-4202-a7ca-633883c148b2-catalog-content\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.121369 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sh2br\" (UniqueName: \"kubernetes.io/projected/841f2982-7c20-4202-a7ca-633883c148b2-kube-api-access-sh2br\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.121468 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841f2982-7c20-4202-a7ca-633883c148b2-utilities\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.121969 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841f2982-7c20-4202-a7ca-633883c148b2-utilities\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.122672 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841f2982-7c20-4202-a7ca-633883c148b2-catalog-content\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc 
kubenswrapper[5117]: I0130 00:16:16.144058 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh2br\" (UniqueName: \"kubernetes.io/projected/841f2982-7c20-4202-a7ca-633883c148b2-kube-api-access-sh2br\") pod \"community-operators-6d7bs\" (UID: \"841f2982-7c20-4202-a7ca-633883c148b2\") " pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.227188 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d7bs" Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.310526 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xfchs"] Jan 30 00:16:16 crc kubenswrapper[5117]: W0130 00:16:16.320933 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4da906ef_6bfe_4595_b492_fc192b73118e.slice/crio-3c97addc16c46bdf3f7abcf6812cba24e0d362f944c7c78176aa1bc592dc8f9c WatchSource:0}: Error finding container 3c97addc16c46bdf3f7abcf6812cba24e0d362f944c7c78176aa1bc592dc8f9c: Status 404 returned error can't find the container with id 3c97addc16c46bdf3f7abcf6812cba24e0d362f944c7c78176aa1bc592dc8f9c Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.642511 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d7bs"] Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.821724 5117 generic.go:358] "Generic (PLEG): container finished" podID="4da906ef-6bfe-4595-b492-fc192b73118e" containerID="d3da19c5164bdeef4f5696c8f2e6de6a8030b5bdd87173eb67aca40235c55bdd" exitCode=0 Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.821773 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfchs" event={"ID":"4da906ef-6bfe-4595-b492-fc192b73118e","Type":"ContainerDied","Data":"d3da19c5164bdeef4f5696c8f2e6de6a8030b5bdd87173eb67aca40235c55bdd"} Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.821825 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfchs" event={"ID":"4da906ef-6bfe-4595-b492-fc192b73118e","Type":"ContainerStarted","Data":"3c97addc16c46bdf3f7abcf6812cba24e0d362f944c7c78176aa1bc592dc8f9c"} Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.826835 5117 generic.go:358] "Generic (PLEG): container finished" podID="00c7c764-9b8c-4146-a659-38621c5e3c35" containerID="16802016abcbac8d0e3a3bd9d6843d258aab5559b4859f2909dec5e12a82827d" exitCode=0 Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.826947 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m42x" event={"ID":"00c7c764-9b8c-4146-a659-38621c5e3c35","Type":"ContainerDied","Data":"16802016abcbac8d0e3a3bd9d6843d258aab5559b4859f2909dec5e12a82827d"} Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.831441 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cq9w" event={"ID":"d2e964cb-3a46-4bbc-823f-43ad384d844c","Type":"ContainerStarted","Data":"fe79c5f944b4aab8eafddc127eb78f7a95d82020cbe8d91658a5e7592ab7a7a6"} Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.854292 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d7bs" 
event={"ID":"841f2982-7c20-4202-a7ca-633883c148b2","Type":"ContainerStarted","Data":"86187cfde5676e58e33a1a134e501b5344c70f36cb18e9fecc100c8a20aab4ca"} Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.854386 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d7bs" event={"ID":"841f2982-7c20-4202-a7ca-633883c148b2","Type":"ContainerStarted","Data":"e282f3c2a2eadc1a9ef0e0789583d4fbb2f26a40ef718419bc3ed9b4bedadea2"} Jan 30 00:16:16 crc kubenswrapper[5117]: I0130 00:16:16.902295 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6cq9w" podStartSLOduration=3.260411599 podStartE2EDuration="3.902270746s" podCreationTimestamp="2026-01-30 00:16:13 +0000 UTC" firstStartedPulling="2026-01-30 00:16:14.802753153 +0000 UTC m=+337.914289043" lastFinishedPulling="2026-01-30 00:16:15.4446123 +0000 UTC m=+338.556148190" observedRunningTime="2026-01-30 00:16:16.896482994 +0000 UTC m=+340.008018884" watchObservedRunningTime="2026-01-30 00:16:16.902270746 +0000 UTC m=+340.013806636" Jan 30 00:16:17 crc kubenswrapper[5117]: I0130 00:16:17.850941 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m42x" event={"ID":"00c7c764-9b8c-4146-a659-38621c5e3c35","Type":"ContainerStarted","Data":"7ac4f32171146ba8a07063348786401f5b5ee6641daef143fe3dbe784e9d70b0"} Jan 30 00:16:17 crc kubenswrapper[5117]: I0130 00:16:17.853145 5117 generic.go:358] "Generic (PLEG): container finished" podID="841f2982-7c20-4202-a7ca-633883c148b2" containerID="86187cfde5676e58e33a1a134e501b5344c70f36cb18e9fecc100c8a20aab4ca" exitCode=0 Jan 30 00:16:17 crc kubenswrapper[5117]: I0130 00:16:17.853312 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d7bs" event={"ID":"841f2982-7c20-4202-a7ca-633883c148b2","Type":"ContainerDied","Data":"86187cfde5676e58e33a1a134e501b5344c70f36cb18e9fecc100c8a20aab4ca"} Jan 30 00:16:17 crc kubenswrapper[5117]: I0130 00:16:17.856871 5117 generic.go:358] "Generic (PLEG): container finished" podID="4da906ef-6bfe-4595-b492-fc192b73118e" containerID="7924258b00a5a2b49ca6ffc535177d3eef99cd927feb2e221e07e81e07c96da1" exitCode=0 Jan 30 00:16:17 crc kubenswrapper[5117]: I0130 00:16:17.857043 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfchs" event={"ID":"4da906ef-6bfe-4595-b492-fc192b73118e","Type":"ContainerDied","Data":"7924258b00a5a2b49ca6ffc535177d3eef99cd927feb2e221e07e81e07c96da1"} Jan 30 00:16:17 crc kubenswrapper[5117]: I0130 00:16:17.875354 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7m42x" podStartSLOduration=4.262635733 podStartE2EDuration="4.875339431s" podCreationTimestamp="2026-01-30 00:16:13 +0000 UTC" firstStartedPulling="2026-01-30 00:16:14.796162098 +0000 UTC m=+337.907697988" lastFinishedPulling="2026-01-30 00:16:15.408865796 +0000 UTC m=+338.520401686" observedRunningTime="2026-01-30 00:16:17.871243326 +0000 UTC m=+340.982779216" watchObservedRunningTime="2026-01-30 00:16:17.875339431 +0000 UTC m=+340.986875321" Jan 30 00:16:18 crc kubenswrapper[5117]: I0130 00:16:18.866097 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d7bs" event={"ID":"841f2982-7c20-4202-a7ca-633883c148b2","Type":"ContainerStarted","Data":"faac9d5c84b3de5c2223ffa247e188fb1ec62294cbe2cc4312fe2b66fed13f54"} Jan 30 00:16:18 crc 
Jan 30 00:16:18 crc kubenswrapper[5117]: I0130 00:16:18.868237 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfchs" event={"ID":"4da906ef-6bfe-4595-b492-fc192b73118e","Type":"ContainerStarted","Data":"0aa97d92b20b3cd5eaf6d911c01dfcf087011c94550c44bbea3785d3219d88a0"}
Jan 30 00:16:18 crc kubenswrapper[5117]: I0130 00:16:18.910370 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xfchs" podStartSLOduration=3.231569591 podStartE2EDuration="3.910348164s" podCreationTimestamp="2026-01-30 00:16:15 +0000 UTC" firstStartedPulling="2026-01-30 00:16:16.822714553 +0000 UTC m=+339.934250433" lastFinishedPulling="2026-01-30 00:16:17.501493096 +0000 UTC m=+340.613029006" observedRunningTime="2026-01-30 00:16:18.90523371 +0000 UTC m=+342.016769590" watchObservedRunningTime="2026-01-30 00:16:18.910348164 +0000 UTC m=+342.021884064"
Jan 30 00:16:19 crc kubenswrapper[5117]: I0130 00:16:19.886865 5117 generic.go:358] "Generic (PLEG): container finished" podID="841f2982-7c20-4202-a7ca-633883c148b2" containerID="faac9d5c84b3de5c2223ffa247e188fb1ec62294cbe2cc4312fe2b66fed13f54" exitCode=0
Jan 30 00:16:19 crc kubenswrapper[5117]: I0130 00:16:19.889224 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d7bs" event={"ID":"841f2982-7c20-4202-a7ca-633883c148b2","Type":"ContainerDied","Data":"faac9d5c84b3de5c2223ffa247e188fb1ec62294cbe2cc4312fe2b66fed13f54"}
Jan 30 00:16:20 crc kubenswrapper[5117]: I0130 00:16:20.893357 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d7bs" event={"ID":"841f2982-7c20-4202-a7ca-633883c148b2","Type":"ContainerStarted","Data":"f6a5b7e7fd264dc85eff2019fde63412202ed6e11619fa6f0b5de21076eea4f8"}
Jan 30 00:16:20 crc kubenswrapper[5117]: I0130 00:16:20.913258 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6d7bs" podStartSLOduration=5.240670756 podStartE2EDuration="5.913239925s" podCreationTimestamp="2026-01-30 00:16:15 +0000 UTC" firstStartedPulling="2026-01-30 00:16:17.854075094 +0000 UTC m=+340.965610974" lastFinishedPulling="2026-01-30 00:16:18.526644253 +0000 UTC m=+341.638180143" observedRunningTime="2026-01-30 00:16:20.910821697 +0000 UTC m=+344.022357587" watchObservedRunningTime="2026-01-30 00:16:20.913239925 +0000 UTC m=+344.024775815"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.451579 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6cq9w"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.451951 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-6cq9w"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.505283 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6cq9w"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.638785 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7m42x"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.638996 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7m42x"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.683637 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7m42x"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.956577 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6cq9w"
Jan 30 00:16:23 crc kubenswrapper[5117]: I0130 00:16:23.974581 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7m42x"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.069405 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xfchs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.071472 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xfchs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.115543 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xfchs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.228224 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-6d7bs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.228267 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6d7bs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.268519 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6d7bs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.980226 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6d7bs"
Jan 30 00:16:26 crc kubenswrapper[5117]: I0130 00:16:26.990497 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xfchs"
Jan 30 00:16:36 crc kubenswrapper[5117]: I0130 00:16:36.848957 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-q4n4v"
Jan 30 00:16:36 crc kubenswrapper[5117]: I0130 00:16:36.926772 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ndwrw"]
Jan 30 00:17:01 crc kubenswrapper[5117]: I0130 00:17:01.972564 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" podUID="9e140562-67a0-4a82-bfab-c678258c734e" containerName="registry" containerID="cri-o://581534ae07887e61de6c967a181171dd3f79c7ac639636656b7f8480a6fa3541" gracePeriod=30
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.165828 5117 generic.go:358] "Generic (PLEG): container finished" podID="9e140562-67a0-4a82-bfab-c678258c734e" containerID="581534ae07887e61de6c967a181171dd3f79c7ac639636656b7f8480a6fa3541" exitCode=0
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.165937 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" event={"ID":"9e140562-67a0-4a82-bfab-c678258c734e","Type":"ContainerDied","Data":"581534ae07887e61de6c967a181171dd3f79c7ac639636656b7f8480a6fa3541"}
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.396467 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.486786 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.486975 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e140562-67a0-4a82-bfab-c678258c734e-ca-trust-extracted\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.487048 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-trusted-ca\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.487075 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-bound-sa-token\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.487101 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e140562-67a0-4a82-bfab-c678258c734e-installation-pull-secrets\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.487152 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw89x\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-kube-api-access-lw89x\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.487173 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-registry-certificates\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.487298 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-registry-tls\") pod \"9e140562-67a0-4a82-bfab-c678258c734e\" (UID: \"9e140562-67a0-4a82-bfab-c678258c734e\") "
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.489592 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.490854 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.496629 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.496860 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e140562-67a0-4a82-bfab-c678258c734e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.497035 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.497494 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-kube-api-access-lw89x" (OuterVolumeSpecName: "kube-api-access-lw89x") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "kube-api-access-lw89x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.504109 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.509547 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e140562-67a0-4a82-bfab-c678258c734e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e140562-67a0-4a82-bfab-c678258c734e" (UID: "9e140562-67a0-4a82-bfab-c678258c734e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
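Each volume of the deleted registry pod goes through the same three-step teardown: reconciler_common logs "operationExecutor.UnmountVolume started", operation_generator logs "UnmountVolume.TearDown succeeded" with the resolved plugin, and the reconciler then reports "Volume detached" (those confirmations follow just below). An illustrative sketch of that actual-versus-desired reconciliation; the type and variable names here are hypothetical stand-ins, not the kubelet's volumemanager API:

    // reconcile.go: a toy model of the unmount flow visible in the entries
    // around this point. Volume names and plugins are copied from the log;
    // the loop structure is an assumption for illustration only.
    package main

    import "fmt"

    type volume struct{ name, plugin string }

    func main() {
        // Volumes still mounted for the deleted pod (actual state of world).
        actualState := []volume{
            {"registry-storage", "kubernetes.io/csi"},
            {"ca-trust-extracted", "kubernetes.io/empty-dir"},
            {"trusted-ca", "kubernetes.io/configmap"},
            {"bound-sa-token", "kubernetes.io/projected"},
            {"installation-pull-secrets", "kubernetes.io/secret"},
            {"kube-api-access-lw89x", "kubernetes.io/projected"},
            {"registry-certificates", "kubernetes.io/configmap"},
            {"registry-tls", "kubernetes.io/projected"},
        }
        // The pod was deleted from the API, so nothing is desired any more.
        desiredState := map[string]bool{}

        for _, v := range actualState {
            if desiredState[v.name] {
                continue // volume still wanted by a live pod
            }
            fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v.name)
            // ...plugin-specific TearDown runs here...
            fmt.Printf("UnmountVolume.TearDown succeeded for volume %q, PluginName %q\n", v.name, v.plugin)
            fmt.Printf("Volume detached for volume %q\n", v.name)
        }
    }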
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.588999 5117 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.589047 5117 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e140562-67a0-4a82-bfab-c678258c734e-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.589061 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.589072 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.589088 5117 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e140562-67a0-4a82-bfab-c678258c734e-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.589103 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lw89x\" (UniqueName: \"kubernetes.io/projected/9e140562-67a0-4a82-bfab-c678258c734e-kube-api-access-lw89x\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:02 crc kubenswrapper[5117]: I0130 00:17:02.589117 5117 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e140562-67a0-4a82-bfab-c678258c734e-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 00:17:03 crc kubenswrapper[5117]: I0130 00:17:03.176146 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw" event={"ID":"9e140562-67a0-4a82-bfab-c678258c734e","Type":"ContainerDied","Data":"e89c60a11c9d29bdf29e935d883c8d9b80212236b8832dd1985c17a25e3d67bf"}
Jan 30 00:17:03 crc kubenswrapper[5117]: I0130 00:17:03.176235 5117 scope.go:117] "RemoveContainer" containerID="581534ae07887e61de6c967a181171dd3f79c7ac639636656b7f8480a6fa3541"
Jan 30 00:17:03 crc kubenswrapper[5117]: I0130 00:17:03.176165 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-ndwrw"
Jan 30 00:17:03 crc kubenswrapper[5117]: I0130 00:17:03.211338 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ndwrw"]
Jan 30 00:17:03 crc kubenswrapper[5117]: I0130 00:17:03.217715 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-ndwrw"]
Jan 30 00:17:04 crc kubenswrapper[5117]: I0130 00:17:04.555537 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:17:04 crc kubenswrapper[5117]: I0130 00:17:04.556067 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:17:05 crc kubenswrapper[5117]: I0130 00:17:05.048931 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e140562-67a0-4a82-bfab-c678258c734e" path="/var/lib/kubelet/pods/9e140562-67a0-4a82-bfab-c678258c734e/volumes"
Jan 30 00:17:34 crc kubenswrapper[5117]: I0130 00:17:34.555646 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:17:34 crc kubenswrapper[5117]: I0130 00:17:34.557762 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.151153 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495538-6k29d"]
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.152802 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e140562-67a0-4a82-bfab-c678258c734e" containerName="registry"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.152826 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e140562-67a0-4a82-bfab-c678258c734e" containerName="registry"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.153014 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e140562-67a0-4a82-bfab-c678258c734e" containerName="registry"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.167053 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-6k29d"]
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.167241 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.171139 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.172338 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.195499 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krcq2\" (UniqueName: \"kubernetes.io/projected/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46-kube-api-access-krcq2\") pod \"auto-csr-approver-29495538-6k29d\" (UID: \"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46\") " pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.297243 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krcq2\" (UniqueName: \"kubernetes.io/projected/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46-kube-api-access-krcq2\") pod \"auto-csr-approver-29495538-6k29d\" (UID: \"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46\") " pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.324726 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krcq2\" (UniqueName: \"kubernetes.io/projected/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46-kube-api-access-krcq2\") pod \"auto-csr-approver-29495538-6k29d\" (UID: \"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46\") " pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.520041 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:00 crc kubenswrapper[5117]: I0130 00:18:00.758665 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-6k29d"]
Jan 30 00:18:01 crc kubenswrapper[5117]: I0130 00:18:01.571167 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495538-6k29d" event={"ID":"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46","Type":"ContainerStarted","Data":"be64ac0717955b5a147cf942b4ccfc7dbdd10500457bac3074bdbbf3f0f2a13c"}
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.194467 5117 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-bl25l"
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.215283 5117 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-bl25l"
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.555331 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.555455 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.555515 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4"
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.556500 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a05881f5d76b5732730f0a57f59c72e0cd420789c5088e30351393724d83be5f"} pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.556640 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" containerID="cri-o://a05881f5d76b5732730f0a57f59c72e0cd420789c5088e30351393724d83be5f" gracePeriod=600
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.593002 5117 generic.go:358] "Generic (PLEG): container finished" podID="cd0ab2d0-bc28-4ced-a7a8-1bd939549e46" containerID="b13426a65e6aaa4e64851c834bb5a6bd91e87c56207f5535845d207bdadde86a" exitCode=0
Jan 30 00:18:04 crc kubenswrapper[5117]: I0130 00:18:04.593060 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495538-6k29d" event={"ID":"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46","Type":"ContainerDied","Data":"b13426a65e6aaa4e64851c834bb5a6bd91e87c56207f5535845d207bdadde86a"}
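The machine-config-daemon liveness probe fails with the same connection-refused error at 00:17:04, 00:17:34 and 00:18:04; on that third consecutive failure (consistent with a failure threshold of three) the kubelet marks the container unhealthy and kills it with its 600-second grace period so it can be restarted. A self-contained sketch of such an HTTP check; the URL is taken from the probe output in the log, everything else is illustrative rather than the kubelet's actual prober code:

    // liveness.go: an illustrative HTTP liveness check against the endpoint
    // the probe entries above target; not kubelet source.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            // With the daemon down this yields exactly the error in the log:
            // dial tcp 127.0.0.1:8798: connect: connection refused
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("probe returned status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
            // After repeated failures the kubelet kills the container with
            // the pod's termination grace period (600s here) and restarts it.
        }
    }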
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.216620 5117 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:13:04 +0000 UTC" deadline="2026-02-23 00:05:00.689058013 +0000 UTC"
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.217090 5117 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="575h46m55.47197459s"
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.605447 5117 generic.go:358] "Generic (PLEG): container finished" podID="3965caad-c581-45b3-88e0-99b4039659c5" containerID="a05881f5d76b5732730f0a57f59c72e0cd420789c5088e30351393724d83be5f" exitCode=0
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.605591 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerDied","Data":"a05881f5d76b5732730f0a57f59c72e0cd420789c5088e30351393724d83be5f"}
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.605752 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"8fa20a680f842b91be2f212674ae09218d15dca3e62b236ca705f6ad0d0dc01e"}
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.605814 5117 scope.go:117] "RemoveContainer" containerID="3c293bd4ba0e83b7d84f57ec967d7e3e831e0b64cdcb433d2fe983f54587848b"
Jan 30 00:18:05 crc kubenswrapper[5117]: I0130 00:18:05.957480 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.102505 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krcq2\" (UniqueName: \"kubernetes.io/projected/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46-kube-api-access-krcq2\") pod \"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46\" (UID: \"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46\") "
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.109830 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46-kube-api-access-krcq2" (OuterVolumeSpecName: "kube-api-access-krcq2") pod "cd0ab2d0-bc28-4ced-a7a8-1bd939549e46" (UID: "cd0ab2d0-bc28-4ced-a7a8-1bd939549e46"). InnerVolumeSpecName "kube-api-access-krcq2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.205077 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-krcq2\" (UniqueName: \"kubernetes.io/projected/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46-kube-api-access-krcq2\") on node \"crc\" DevicePath \"\""
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.217587 5117 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:13:04 +0000 UTC" deadline="2026-02-24 19:16:39.432939961 +0000 UTC"
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.217640 5117 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="618h58m33.215305482s"
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.615253 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495538-6k29d" event={"ID":"cd0ab2d0-bc28-4ced-a7a8-1bd939549e46","Type":"ContainerDied","Data":"be64ac0717955b5a147cf942b4ccfc7dbdd10500457bac3074bdbbf3f0f2a13c"}
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.615312 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be64ac0717955b5a147cf942b4ccfc7dbdd10500457bac3074bdbbf3f0f2a13c"
Jan 30 00:18:06 crc kubenswrapper[5117]: I0130 00:18:06.615327 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495538-6k29d"
Jan 30 00:18:39 crc kubenswrapper[5117]: I0130 00:18:39.323842 5117 scope.go:117] "RemoveContainer" containerID="6e653134294a876171722b23a25dca9f7839fa891b824b3b44f5a10bade30a4c"
Jan 30 00:18:39 crc kubenswrapper[5117]: I0130 00:18:39.355640 5117 scope.go:117] "RemoveContainer" containerID="c85453f8953d85a9a144261d97e1d225b4489bb64808095ab3814b05e68adf95"
Jan 30 00:19:39 crc kubenswrapper[5117]: I0130 00:19:39.398911 5117 scope.go:117] "RemoveContainer" containerID="0cba7766ef032abb026fc044d347a6170603f7ceb86ec1519146b60460136121"
Jan 30 00:19:39 crc kubenswrapper[5117]: I0130 00:19:39.417882 5117 scope.go:117] "RemoveContainer" containerID="d6b80db46aee6e6c0d623048b017742bf66cb7a0562173d5ec24bf01a9bd0c0e"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.134890 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495540-vskp9"]
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.136374 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cd0ab2d0-bc28-4ced-a7a8-1bd939549e46" containerName="oc"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.136391 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0ab2d0-bc28-4ced-a7a8-1bd939549e46" containerName="oc"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.136507 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="cd0ab2d0-bc28-4ced-a7a8-1bd939549e46" containerName="oc"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.189703 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-vskp9"]
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.190095 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-vskp9"
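The two certificate_manager pairs above are internally consistent: the logged sleep is simply the rotation deadline minus the time the entry was written (the jittered choice of the deadline itself happens inside the kubelet and is not reproduced here). A quick Go check against the first pair, using the log line's own timestamp as "now":

    // rotation.go: verifies sleep = deadline - now for the first
    // certificate_manager pair above; a checking sketch, not kubelet code.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        deadline := mustParse("2026-02-23 00:05:00.689058013 +0000 UTC")
        now := mustParse("2026-01-30 00:18:05.217090 +0000 UTC") // klog timestamp of the entry
        // Prints 575h46m55.471968013s, matching sleep="575h46m55.47197459s"
        // to within the few microseconds between computing and logging.
        fmt.Println(deadline.Sub(now))
    }

The second pair checks out the same way: 2026-02-24 19:16:39.43 minus 2026-01-30 00:18:06.217 is about 618h58m33.2s.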
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.193099 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.193199 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.241983 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5z9\" (UniqueName: \"kubernetes.io/projected/93c4ecc8-1969-413f-bcd9-07ba11e53d0c-kube-api-access-pr5z9\") pod \"auto-csr-approver-29495540-vskp9\" (UID: \"93c4ecc8-1969-413f-bcd9-07ba11e53d0c\") " pod="openshift-infra/auto-csr-approver-29495540-vskp9"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.342946 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5z9\" (UniqueName: \"kubernetes.io/projected/93c4ecc8-1969-413f-bcd9-07ba11e53d0c-kube-api-access-pr5z9\") pod \"auto-csr-approver-29495540-vskp9\" (UID: \"93c4ecc8-1969-413f-bcd9-07ba11e53d0c\") " pod="openshift-infra/auto-csr-approver-29495540-vskp9"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.364121 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5z9\" (UniqueName: \"kubernetes.io/projected/93c4ecc8-1969-413f-bcd9-07ba11e53d0c-kube-api-access-pr5z9\") pod \"auto-csr-approver-29495540-vskp9\" (UID: \"93c4ecc8-1969-413f-bcd9-07ba11e53d0c\") " pod="openshift-infra/auto-csr-approver-29495540-vskp9"
Jan 30 00:20:00 crc kubenswrapper[5117]: I0130 00:20:00.517008 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-vskp9"
Jan 30 00:20:01 crc kubenswrapper[5117]: I0130 00:20:01.003463 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-vskp9"]
Jan 30 00:20:01 crc kubenswrapper[5117]: I0130 00:20:01.407918 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-vskp9" event={"ID":"93c4ecc8-1969-413f-bcd9-07ba11e53d0c","Type":"ContainerStarted","Data":"cbab0f85403191c98248df89dc00bf48019ae94f673a715c3c44a743a74017ad"}
Jan 30 00:20:02 crc kubenswrapper[5117]: I0130 00:20:02.415336 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-vskp9" event={"ID":"93c4ecc8-1969-413f-bcd9-07ba11e53d0c","Type":"ContainerStarted","Data":"51cc98933531c2f052fb0b9df8b8f898d3c8d27da7a0ecb0173330297645ae0a"}
Jan 30 00:20:02 crc kubenswrapper[5117]: I0130 00:20:02.448175 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495540-vskp9" podStartSLOduration=1.5219349819999999 podStartE2EDuration="2.448141064s" podCreationTimestamp="2026-01-30 00:20:00 +0000 UTC" firstStartedPulling="2026-01-30 00:20:01.018928423 +0000 UTC m=+564.130464313" lastFinishedPulling="2026-01-30 00:20:01.945134505 +0000 UTC m=+565.056670395" observedRunningTime="2026-01-30 00:20:02.442398413 +0000 UTC m=+565.553934313" watchObservedRunningTime="2026-01-30 00:20:02.448141064 +0000 UTC m=+565.559676984"
Jan 30 00:20:03 crc kubenswrapper[5117]: I0130 00:20:03.424998 5117 generic.go:358] "Generic (PLEG): container finished" podID="93c4ecc8-1969-413f-bcd9-07ba11e53d0c" containerID="51cc98933531c2f052fb0b9df8b8f898d3c8d27da7a0ecb0173330297645ae0a" exitCode=0
Jan 30 00:20:03 crc kubenswrapper[5117]: I0130 00:20:03.425123 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-vskp9" event={"ID":"93c4ecc8-1969-413f-bcd9-07ba11e53d0c","Type":"ContainerDied","Data":"51cc98933531c2f052fb0b9df8b8f898d3c8d27da7a0ecb0173330297645ae0a"}
Jan 30 00:20:04 crc kubenswrapper[5117]: I0130 00:20:04.555400 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:20:04 crc kubenswrapper[5117]: I0130 00:20:04.555493 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:20:04 crc kubenswrapper[5117]: I0130 00:20:04.705210 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-vskp9"
Jan 30 00:20:04 crc kubenswrapper[5117]: I0130 00:20:04.801792 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5z9\" (UniqueName: \"kubernetes.io/projected/93c4ecc8-1969-413f-bcd9-07ba11e53d0c-kube-api-access-pr5z9\") pod \"93c4ecc8-1969-413f-bcd9-07ba11e53d0c\" (UID: \"93c4ecc8-1969-413f-bcd9-07ba11e53d0c\") "
Jan 30 00:20:04 crc kubenswrapper[5117]: I0130 00:20:04.808858 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c4ecc8-1969-413f-bcd9-07ba11e53d0c-kube-api-access-pr5z9" (OuterVolumeSpecName: "kube-api-access-pr5z9") pod "93c4ecc8-1969-413f-bcd9-07ba11e53d0c" (UID: "93c4ecc8-1969-413f-bcd9-07ba11e53d0c"). InnerVolumeSpecName "kube-api-access-pr5z9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:20:04 crc kubenswrapper[5117]: I0130 00:20:04.902780 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pr5z9\" (UniqueName: \"kubernetes.io/projected/93c4ecc8-1969-413f-bcd9-07ba11e53d0c-kube-api-access-pr5z9\") on node \"crc\" DevicePath \"\""
Jan 30 00:20:05 crc kubenswrapper[5117]: I0130 00:20:05.444337 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-vskp9"
Jan 30 00:20:05 crc kubenswrapper[5117]: I0130 00:20:05.444378 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-vskp9" event={"ID":"93c4ecc8-1969-413f-bcd9-07ba11e53d0c","Type":"ContainerDied","Data":"cbab0f85403191c98248df89dc00bf48019ae94f673a715c3c44a743a74017ad"}
Jan 30 00:20:05 crc kubenswrapper[5117]: I0130 00:20:05.444452 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbab0f85403191c98248df89dc00bf48019ae94f673a715c3c44a743a74017ad"
Jan 30 00:20:34 crc kubenswrapper[5117]: I0130 00:20:34.555794 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:20:34 crc kubenswrapper[5117]: I0130 00:20:34.556303 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.574134 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"]
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.574466 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="kube-rbac-proxy" containerID="cri-o://762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.574556 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="ovnkube-cluster-manager" containerID="cri-o://c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.762988 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.797025 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"]
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798193 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="kube-rbac-proxy"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798220 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="kube-rbac-proxy"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798249 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="ovnkube-cluster-manager"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798258 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="ovnkube-cluster-manager"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798270 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93c4ecc8-1969-413f-bcd9-07ba11e53d0c" containerName="oc"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798276 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c4ecc8-1969-413f-bcd9-07ba11e53d0c" containerName="oc"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798397 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="ovnkube-cluster-manager"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798410 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="93c4ecc8-1969-413f-bcd9-07ba11e53d0c" containerName="oc"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.798427 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerName="kube-rbac-proxy"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.801983 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cdnjt"]
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802400 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-controller" containerID="cri-o://2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802412 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="nbdb" containerID="cri-o://4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802514 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-node" containerID="cri-o://4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de" gracePeriod=30
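The DELETE of ovnkube-node-cdnjt fans out into one "Killing container with a grace period" entry per container, all with gracePeriod=30 (the remaining containers continue below). Behind that field is the usual TERM-then-KILL contract; a generic illustration with a local process, under the assumption that this models the runtime's behavior, since the kubelet itself only passes the timeout to CRI-O over the CRI StopContainer call:

    // gracekill.go: an illustrative TERM-then-KILL shutdown with a grace
    // period; generic process handling, not kubelet or CRI-O source.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL once the grace period expires
            <-done
            fmt.Println("killed after grace period")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithGrace(cmd, 30*time.Second) // gracePeriod=30, as in the log
    }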
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802513 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802561 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="sbdb" containerID="cri-o://4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802549 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-acl-logging" containerID="cri-o://0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802794 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.802652 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="northd" containerID="cri-o://06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.860634 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-env-overrides\") pod \"ef32555a-37d0-4ff7-80d6-3d572916786f\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") "
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.860991 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef32555a-37d0-4ff7-80d6-3d572916786f-ovn-control-plane-metrics-cert\") pod \"ef32555a-37d0-4ff7-80d6-3d572916786f\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") "
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.861069 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hklp5\" (UniqueName: \"kubernetes.io/projected/ef32555a-37d0-4ff7-80d6-3d572916786f-kube-api-access-hklp5\") pod \"ef32555a-37d0-4ff7-80d6-3d572916786f\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") "
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.861113 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-ovnkube-config\") pod \"ef32555a-37d0-4ff7-80d6-3d572916786f\" (UID: \"ef32555a-37d0-4ff7-80d6-3d572916786f\") "
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.861376 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ef32555a-37d0-4ff7-80d6-3d572916786f" (UID: "ef32555a-37d0-4ff7-80d6-3d572916786f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.861536 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ef32555a-37d0-4ff7-80d6-3d572916786f" (UID: "ef32555a-37d0-4ff7-80d6-3d572916786f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.862784 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovnkube-controller" containerID="cri-o://064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e" gracePeriod=30
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.871014 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef32555a-37d0-4ff7-80d6-3d572916786f-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "ef32555a-37d0-4ff7-80d6-3d572916786f" (UID: "ef32555a-37d0-4ff7-80d6-3d572916786f"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.874605 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef32555a-37d0-4ff7-80d6-3d572916786f-kube-api-access-hklp5" (OuterVolumeSpecName: "kube-api-access-hklp5") pod "ef32555a-37d0-4ff7-80d6-3d572916786f" (UID: "ef32555a-37d0-4ff7-80d6-3d572916786f"). InnerVolumeSpecName "kube-api-access-hklp5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962456 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df09b42f-7426-4395-8f42-2f7da2217bb9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962531 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d49n\" (UniqueName: \"kubernetes.io/projected/df09b42f-7426-4395-8f42-2f7da2217bb9-kube-api-access-5d49n\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962563 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df09b42f-7426-4395-8f42-2f7da2217bb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962591 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df09b42f-7426-4395-8f42-2f7da2217bb9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962635 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962652 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef32555a-37d0-4ff7-80d6-3d572916786f-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962733 5117 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef32555a-37d0-4ff7-80d6-3d572916786f-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 30 00:20:35 crc kubenswrapper[5117]: I0130 00:20:35.962760 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hklp5\" (UniqueName: \"kubernetes.io/projected/ef32555a-37d0-4ff7-80d6-3d572916786f-kube-api-access-hklp5\") on node \"crc\" DevicePath \"\""
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.063829 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5d49n\" (UniqueName: \"kubernetes.io/projected/df09b42f-7426-4395-8f42-2f7da2217bb9-kube-api-access-5d49n\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.063875 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df09b42f-7426-4395-8f42-2f7da2217bb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.063901 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df09b42f-7426-4395-8f42-2f7da2217bb9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.063933 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df09b42f-7426-4395-8f42-2f7da2217bb9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.064626 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df09b42f-7426-4395-8f42-2f7da2217bb9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.064803 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df09b42f-7426-4395-8f42-2f7da2217bb9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.069249 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df09b42f-7426-4395-8f42-2f7da2217bb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.087200 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d49n\" (UniqueName: \"kubernetes.io/projected/df09b42f-7426-4395-8f42-2f7da2217bb9-kube-api-access-5d49n\") pod \"ovnkube-control-plane-97c9b6c48-h7b4c\" (UID: \"df09b42f-7426-4395-8f42-2f7da2217bb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.161782 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cdnjt_ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0/ovn-acl-logging/0.log"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.162453 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cdnjt_ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0/ovn-controller/0.log"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.163135 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.174893 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.228220 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rvwl6"]
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.228949 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kubecfg-setup"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.228972 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kubecfg-setup"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229001 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229011 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229036 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="nbdb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229044 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="nbdb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229079 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-acl-logging"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229088 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-acl-logging"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229097 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="sbdb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229105 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="sbdb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229115 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="northd"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229121 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="northd"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229129 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-node"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229137 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-node"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229164 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovnkube-controller"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229172 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovnkube-controller"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229184 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-controller"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229191 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-controller"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229380 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-controller"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229416 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229426 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="northd"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229436 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovnkube-controller"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229446 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="nbdb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229454 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="sbdb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229462 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="ovn-acl-logging"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.229471 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerName="kube-rbac-proxy-node"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265091 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovn-node-metrics-cert\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265156 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-env-overrides\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265395 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-var-lib-openvswitch\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265482 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-netd\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265538 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-node-log\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265619 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-slash\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265726 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-script-lib\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265897 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-bin\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.265955 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-systemd-units\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266038 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-openvswitch\") pod 
\"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266467 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-config\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266514 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-ovn\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266581 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-etc-openvswitch\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266627 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-node-log" (OuterVolumeSpecName: "node-log") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266646 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266670 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266709 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-slash" (OuterVolumeSpecName: "host-slash") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.266806 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267347 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267386 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-ovn-kubernetes\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267415 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-log-socket\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267469 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpvmf\" (UniqueName: \"kubernetes.io/projected/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-kube-api-access-rpvmf\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267484 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-kubelet\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267499 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-systemd\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267544 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-netns\") pod \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\" (UID: \"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0\") " Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267579 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267683 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267758 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267794 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267810 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267864 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267903 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-log-socket" (OuterVolumeSpecName: "log-socket") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.267957 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268004 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268094 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268307 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268342 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268374 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268411 5117 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268434 5117 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268455 5117 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268476 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268499 5117 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268521 5117 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268544 5117 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" 
DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268568 5117 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268591 5117 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268611 5117 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268631 5117 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268652 5117 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268673 5117 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.268720 5117 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.276819 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.284760 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-kube-api-access-rpvmf" (OuterVolumeSpecName: "kube-api-access-rpvmf") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "kube-api-access-rpvmf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.295082 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" (UID: "ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370368 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-run-netns\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370426 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-cni-bin\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370457 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-env-overrides\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370516 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-kubelet\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370574 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-node-log\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370604 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370649 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-cni-netd\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370714 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75b791a0-e01e-471f-9033-170548aebe3a-ovn-node-metrics-cert\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370770 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-etc-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370813 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-log-socket\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370850 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-ovnkube-config\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370890 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-var-lib-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370929 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-systemd\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.370975 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-ovn\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371023 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5b9p\" (UniqueName: \"kubernetes.io/projected/75b791a0-e01e-471f-9033-170548aebe3a-kube-api-access-j5b9p\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371061 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-slash\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371100 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-ovnkube-script-lib\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371148 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-systemd-units\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371179 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371250 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371323 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpvmf\" (UniqueName: \"kubernetes.io/projected/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-kube-api-access-rpvmf\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371346 5117 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371359 5117 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371373 5117 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.371395 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472298 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-var-lib-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472382 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-var-lib-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472447 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-systemd\") pod 
\"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472508 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-systemd\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472556 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-ovn\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472620 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-ovn\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472662 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5b9p\" (UniqueName: \"kubernetes.io/projected/75b791a0-e01e-471f-9033-170548aebe3a-kube-api-access-j5b9p\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472892 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-slash\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.472994 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-slash\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.473048 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-ovnkube-script-lib\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.473266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-systemd-units\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.473798 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-systemd-units\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc 
kubenswrapper[5117]: I0130 00:20:36.473892 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.473963 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474003 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-run-netns\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474031 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-cni-bin\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474060 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-env-overrides\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474068 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474121 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-run-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474205 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-ovnkube-script-lib\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474216 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-cni-bin\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: 
I0130 00:20:36.474245 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-kubelet\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474209 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-run-netns\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474309 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-kubelet\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474385 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-node-log\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474412 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474467 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-cni-netd\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474497 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75b791a0-e01e-471f-9033-170548aebe3a-ovn-node-metrics-cert\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474499 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-node-log\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474559 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-etc-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474579 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-cni-netd\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474570 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474621 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-log-socket\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474657 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-ovnkube-config\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474670 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-etc-openvswitch\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474710 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75b791a0-e01e-471f-9033-170548aebe3a-log-socket\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.474735 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-env-overrides\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.475902 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75b791a0-e01e-471f-9033-170548aebe3a-ovnkube-config\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.480301 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75b791a0-e01e-471f-9033-170548aebe3a-ovn-node-metrics-cert\") pod \"ovnkube-node-rvwl6\" (UID: \"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.492982 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5b9p\" (UniqueName: \"kubernetes.io/projected/75b791a0-e01e-471f-9033-170548aebe3a-kube-api-access-j5b9p\") pod \"ovnkube-node-rvwl6\" (UID: 
\"75b791a0-e01e-471f-9033-170548aebe3a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.562488 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:36 crc kubenswrapper[5117]: W0130 00:20:36.588572 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75b791a0_e01e_471f_9033_170548aebe3a.slice/crio-da95d2ac325773e5c55d544ed6e45ca61fe8a50cdc4a1eddd8f291728e85474d WatchSource:0}: Error finding container da95d2ac325773e5c55d544ed6e45ca61fe8a50cdc4a1eddd8f291728e85474d: Status 404 returned error can't find the container with id da95d2ac325773e5c55d544ed6e45ca61fe8a50cdc4a1eddd8f291728e85474d Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.682923 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cdnjt_ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0/ovn-acl-logging/0.log" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.683489 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cdnjt_ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0/ovn-controller/0.log" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684000 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e" exitCode=0 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684030 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905" exitCode=0 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684043 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192" exitCode=0 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684054 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae" exitCode=0 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684065 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb" exitCode=0 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684121 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de" exitCode=0 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684135 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288" exitCode=143 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684146 5117 generic.go:358] "Generic (PLEG): container finished" podID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de" exitCode=143 Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684477 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684594 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684675 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684743 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684770 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684822 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684847 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684863 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.685709 5117 scope.go:117] "RemoveContainer" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.684876 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688788 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688807 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688822 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688832 5117 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688839 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688846 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688853 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688859 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688866 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688872 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688879 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688889 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688899 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688909 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688940 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688950 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688958 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688965 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688973 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688979 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.688988 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689001 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cdnjt" event={"ID":"ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0","Type":"ContainerDied","Data":"a79a67d4dd15044e0af5558f93d1a71d4610b083ab05d901ec4d411b19f5dea2"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689014 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689024 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689033 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689042 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689050 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689057 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689064 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689070 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.689077 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.693830 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"2a0d4dc739b248ac907c19ad8d711f583f33e67281b9cce40331d586f910344e"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.693863 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"da95d2ac325773e5c55d544ed6e45ca61fe8a50cdc4a1eddd8f291728e85474d"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696477 5117 generic.go:358] "Generic (PLEG): container finished" podID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerID="c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c" exitCode=0
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696502 5117 generic.go:358] "Generic (PLEG): container finished" podID="ef32555a-37d0-4ff7-80d6-3d572916786f" containerID="762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536" exitCode=0
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696605 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" event={"ID":"ef32555a-37d0-4ff7-80d6-3d572916786f","Type":"ContainerDied","Data":"c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696667 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696749 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696811 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" event={"ID":"ef32555a-37d0-4ff7-80d6-3d572916786f","Type":"ContainerDied","Data":"762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696872 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696946 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696999 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg" event={"ID":"ef32555a-37d0-4ff7-80d6-3d572916786f","Type":"ContainerDied","Data":"f27688a46cecb1d0f451bdf3dfd38c9159e089f29d78b4b499102d79d1d9e088"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.697055 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.697105 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.696711 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.698681 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sdjgw_c0ccdffb-2e23-428a-8423-b08f9d708b15/kube-multus/0.log"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.698754 5117 generic.go:358] "Generic (PLEG): container finished" podID="c0ccdffb-2e23-428a-8423-b08f9d708b15" containerID="a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092" exitCode=2
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.698864 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sdjgw" event={"ID":"c0ccdffb-2e23-428a-8423-b08f9d708b15","Type":"ContainerDied","Data":"a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.699361 5117 scope.go:117] "RemoveContainer" containerID="a8bcd34e890bf8baff2160ccc56d1efb92d9851face19b27f5725766ed4a4092"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.709017 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c" event={"ID":"df09b42f-7426-4395-8f42-2f7da2217bb9","Type":"ContainerStarted","Data":"17f094fc5eeeba686f9ec055ec316b96938d7c62ab081761f5f93a8baf11960f"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.709067 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c" event={"ID":"df09b42f-7426-4395-8f42-2f7da2217bb9","Type":"ContainerStarted","Data":"efb931ee4d75593ae59160000afaa09ba0e9f61a8a476cf628d519104e9dfb9e"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.709085 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c" event={"ID":"df09b42f-7426-4395-8f42-2f7da2217bb9","Type":"ContainerStarted","Data":"b5c5938928bd76b429fab9b53f7ccd793a7859345eb7beaefcf7d5e51c25ae05"}
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.785835 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-h7b4c" podStartSLOduration=1.7858114600000001 podStartE2EDuration="1.78581146s" podCreationTimestamp="2026-01-30 00:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:20:36.779922145 +0000 UTC m=+599.891458075" watchObservedRunningTime="2026-01-30 00:20:36.78581146 +0000 UTC m=+599.897347370"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.812305 5117 scope.go:117] "RemoveContainer" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.827720 5117 scope.go:117] "RemoveContainer" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.833551 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"]
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.841173 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-vlmjg"]
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.850424 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cdnjt"]
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.852424 5117 scope.go:117] "RemoveContainer" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.854189 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cdnjt"]
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.876294 5117 scope.go:117] "RemoveContainer" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.887699 5117 scope.go:117] "RemoveContainer" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.901564 5117 scope.go:117] "RemoveContainer" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.915327 5117 scope.go:117] "RemoveContainer" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.938101 5117 scope.go:117] "RemoveContainer" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.962851 5117 scope.go:117] "RemoveContainer" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.972290 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": container with ID starting with 064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e not found: ID does not exist" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.972340 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} err="failed to get container status \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": rpc error: code = NotFound desc = could not find container \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": container with ID starting with 064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.972374 5117 scope.go:117] "RemoveContainer" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.972683 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": container with ID starting with 4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905 not found: ID does not exist" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.972838 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} err="failed to get container status \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": rpc error: code = NotFound desc = could not find container \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": container with ID starting with 4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.972869 5117 scope.go:117] "RemoveContainer" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.973080 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": container with ID starting with 4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192 not found: ID does not exist" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973098 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} err="failed to get container status \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": rpc error: code = NotFound desc = could not find container \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": container with ID starting with 4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973126 5117 scope.go:117] "RemoveContainer" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.973311 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": container with ID starting with 06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae not found: ID does not exist" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973341 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} err="failed to get container status \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": rpc error: code = NotFound desc = could not find container \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": container with ID starting with 06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973353 5117 scope.go:117] "RemoveContainer" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.973526 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": container with ID starting with 77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb not found: ID does not exist" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973554 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} err="failed to get container status \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": rpc error: code = NotFound desc = could not find container \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": container with ID starting with 77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973565 5117 scope.go:117] "RemoveContainer" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.973773 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": container with ID starting with 4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de not found: ID does not exist" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973790 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} err="failed to get container status \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": rpc error: code = NotFound desc = could not find container \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": container with ID starting with 4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.973804 5117 scope.go:117] "RemoveContainer" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"
Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.973998 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": container with ID starting with 0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288 not found: ID does not exist" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.974023 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"} err="failed to get container status \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": rpc error: code = NotFound desc = could not find container \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": container with ID starting with 0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.974039 5117 scope.go:117] "RemoveContainer" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"
\"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": container with ID starting with 2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de not found: ID does not exist" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.974410 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"} err="failed to get container status \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": rpc error: code = NotFound desc = could not find container \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": container with ID starting with 2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.974427 5117 scope.go:117] "RemoveContainer" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c" Jan 30 00:20:36 crc kubenswrapper[5117]: E0130 00:20:36.974810 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": container with ID starting with 97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c not found: ID does not exist" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.974826 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"} err="failed to get container status \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": rpc error: code = NotFound desc = could not find container \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": container with ID starting with 97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.974837 5117 scope.go:117] "RemoveContainer" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975056 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} err="failed to get container status \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": rpc error: code = NotFound desc = could not find container \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": container with ID starting with 064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975069 5117 scope.go:117] "RemoveContainer" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975364 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} err="failed to get container status \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": rpc error: code = NotFound desc = could not find container \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": container with ID starting with 
4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905 not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975377 5117 scope.go:117] "RemoveContainer" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975578 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} err="failed to get container status \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": rpc error: code = NotFound desc = could not find container \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": container with ID starting with 4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192 not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975596 5117 scope.go:117] "RemoveContainer" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975965 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} err="failed to get container status \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": rpc error: code = NotFound desc = could not find container \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": container with ID starting with 06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.975981 5117 scope.go:117] "RemoveContainer" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.976213 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} err="failed to get container status \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": rpc error: code = NotFound desc = could not find container \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": container with ID starting with 77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.976230 5117 scope.go:117] "RemoveContainer" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.976494 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} err="failed to get container status \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": rpc error: code = NotFound desc = could not find container \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": container with ID starting with 4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.976513 5117 scope.go:117] "RemoveContainer" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.976791 5117 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"} err="failed to get container status \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": rpc error: code = NotFound desc = could not find container \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": container with ID starting with 0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288 not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.976814 5117 scope.go:117] "RemoveContainer" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.977775 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"} err="failed to get container status \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": rpc error: code = NotFound desc = could not find container \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": container with ID starting with 2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.977816 5117 scope.go:117] "RemoveContainer" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978228 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"} err="failed to get container status \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": rpc error: code = NotFound desc = could not find container \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": container with ID starting with 97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978253 5117 scope.go:117] "RemoveContainer" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978525 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} err="failed to get container status \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": rpc error: code = NotFound desc = could not find container \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": container with ID starting with 064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978543 5117 scope.go:117] "RemoveContainer" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978732 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} err="failed to get container status \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": rpc error: code = NotFound desc = could not find container \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": container with ID starting with 4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905 not found: ID does not exist" Jan 
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978771 5117 scope.go:117] "RemoveContainer" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978921 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} err="failed to get container status \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": rpc error: code = NotFound desc = could not find container \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": container with ID starting with 4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.978935 5117 scope.go:117] "RemoveContainer" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979075 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} err="failed to get container status \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": rpc error: code = NotFound desc = could not find container \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": container with ID starting with 06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979089 5117 scope.go:117] "RemoveContainer" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979215 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} err="failed to get container status \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": rpc error: code = NotFound desc = could not find container \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": container with ID starting with 77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979230 5117 scope.go:117] "RemoveContainer" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979375 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} err="failed to get container status \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": rpc error: code = NotFound desc = could not find container \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": container with ID starting with 4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979390 5117 scope.go:117] "RemoveContainer" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979521 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"} err="failed to get container status \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": rpc error: code = NotFound desc = could not find container \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": container with ID starting with 0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979536 5117 scope.go:117] "RemoveContainer" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979672 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"} err="failed to get container status \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": rpc error: code = NotFound desc = could not find container \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": container with ID starting with 2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979698 5117 scope.go:117] "RemoveContainer" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979910 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"} err="failed to get container status \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": rpc error: code = NotFound desc = could not find container \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": container with ID starting with 97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.979927 5117 scope.go:117] "RemoveContainer" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980132 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} err="failed to get container status \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": rpc error: code = NotFound desc = could not find container \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": container with ID starting with 064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980182 5117 scope.go:117] "RemoveContainer" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980377 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} err="failed to get container status \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": rpc error: code = NotFound desc = could not find container \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": container with ID starting with 4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980393 5117 scope.go:117] "RemoveContainer" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980588 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} err="failed to get container status \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": rpc error: code = NotFound desc = could not find container \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": container with ID starting with 4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192 not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980606 5117 scope.go:117] "RemoveContainer" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980848 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} err="failed to get container status \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": rpc error: code = NotFound desc = could not find container \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": container with ID starting with 06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.980887 5117 scope.go:117] "RemoveContainer" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.981131 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} err="failed to get container status \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": rpc error: code = NotFound desc = could not find container \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": container with ID starting with 77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.981146 5117 scope.go:117] "RemoveContainer" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.981896 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} err="failed to get container status \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": rpc error: code = NotFound desc = could not find container \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": container with ID starting with 4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de not found: ID does not exist"
Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.981938 5117 scope.go:117] "RemoveContainer" containerID="0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288"
container \"0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288\": container with ID starting with 0300bec43e8841c01d8f6b44b86afc610eb3dd4736a0d464e10167a8e2b04288 not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.982210 5117 scope.go:117] "RemoveContainer" containerID="2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.982467 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de"} err="failed to get container status \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": rpc error: code = NotFound desc = could not find container \"2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de\": container with ID starting with 2271383b0ff3af94c9ca40e2359d46b25d4c07e118ebab55c4962ad1e84d09de not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.982499 5117 scope.go:117] "RemoveContainer" containerID="97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.982708 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c"} err="failed to get container status \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": rpc error: code = NotFound desc = could not find container \"97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c\": container with ID starting with 97c593f89f438806dd887efcd430c9ab24c7d6e7a15851c526cb60da0c37670c not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.982726 5117 scope.go:117] "RemoveContainer" containerID="064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983077 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e"} err="failed to get container status \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": rpc error: code = NotFound desc = could not find container \"064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e\": container with ID starting with 064db69c77db80ee6ba9ee906dde2bf98bd33becc67d986d5b7f6eb97884322e not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983096 5117 scope.go:117] "RemoveContainer" containerID="4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983333 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905"} err="failed to get container status \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": rpc error: code = NotFound desc = could not find container \"4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905\": container with ID starting with 4fcb15437adc0eaecc78905149242326b98a25be77d77876c9e314058f6db905 not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983350 5117 scope.go:117] "RemoveContainer" containerID="4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983529 5117 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192"} err="failed to get container status \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": rpc error: code = NotFound desc = could not find container \"4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192\": container with ID starting with 4aa090c8694774924bd469a9d150920034e8b77cbb660d17b53014ad37c21192 not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983546 5117 scope.go:117] "RemoveContainer" containerID="06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983758 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae"} err="failed to get container status \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": rpc error: code = NotFound desc = could not find container \"06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae\": container with ID starting with 06182785f4e38512ddf52340921381a243bf56a4f1c046e7f42099debcfdc1ae not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983775 5117 scope.go:117] "RemoveContainer" containerID="77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983976 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb"} err="failed to get container status \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": rpc error: code = NotFound desc = could not find container \"77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb\": container with ID starting with 77f9fa339f251a1076b7f1d1953f56ee36f1df312d05391737195f7cebaf19eb not found: ID does not exist" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.983991 5117 scope.go:117] "RemoveContainer" containerID="4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de" Jan 30 00:20:36 crc kubenswrapper[5117]: I0130 00:20:36.984169 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de"} err="failed to get container status \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": rpc error: code = NotFound desc = could not find container \"4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de\": container with ID starting with 4793fa2aedbaee5c1383fbffc8f836d92239cf4661546cad65f70089520e20de not found: ID does not exist" Jan 30 00:20:37 crc kubenswrapper[5117]: I0130 00:20:37.043469 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0" path="/var/lib/kubelet/pods/ae50f46f-8c30-46ce-91a1-9e2ce73d4fe0/volumes" Jan 30 00:20:37 crc kubenswrapper[5117]: I0130 00:20:37.045032 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef32555a-37d0-4ff7-80d6-3d572916786f" path="/var/lib/kubelet/pods/ef32555a-37d0-4ff7-80d6-3d572916786f/volumes" Jan 30 00:20:37 crc kubenswrapper[5117]: I0130 00:20:37.715573 5117 generic.go:358] "Generic (PLEG): container finished" podID="75b791a0-e01e-471f-9033-170548aebe3a" 
containerID="2a0d4dc739b248ac907c19ad8d711f583f33e67281b9cce40331d586f910344e" exitCode=0 Jan 30 00:20:37 crc kubenswrapper[5117]: I0130 00:20:37.715652 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerDied","Data":"2a0d4dc739b248ac907c19ad8d711f583f33e67281b9cce40331d586f910344e"} Jan 30 00:20:37 crc kubenswrapper[5117]: I0130 00:20:37.718923 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sdjgw_c0ccdffb-2e23-428a-8423-b08f9d708b15/kube-multus/0.log" Jan 30 00:20:37 crc kubenswrapper[5117]: I0130 00:20:37.719080 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sdjgw" event={"ID":"c0ccdffb-2e23-428a-8423-b08f9d708b15","Type":"ContainerStarted","Data":"d7531561ed83ba9700935949f230fe7c62b56a2c903123a0030c6f795c434ccb"} Jan 30 00:20:38 crc kubenswrapper[5117]: I0130 00:20:38.735575 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"c7079a58d5f81731165e6ed21d87b145a7695ee48643f6b6489bf125b50dd8c3"} Jan 30 00:20:38 crc kubenswrapper[5117]: I0130 00:20:38.736394 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"9ee37a033a3c22b419dd41797fe21677bb9361c3eaabdd226fb9081908b7b07d"} Jan 30 00:20:38 crc kubenswrapper[5117]: I0130 00:20:38.736420 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"ed17dd7529dd61e30a5456377007f223b4e164c59e5167b3e73a8cd6f014a17f"} Jan 30 00:20:38 crc kubenswrapper[5117]: I0130 00:20:38.736452 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"0c3115a9e89f08c37de49b8c16536cd1ad7918edf7f8ec2fc6ee968d75a683da"} Jan 30 00:20:38 crc kubenswrapper[5117]: I0130 00:20:38.736473 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"67396aed9c618f49617f3fe21b4a9ebda4e8f52ff2e4eb45bfeb49c2a3dc626d"} Jan 30 00:20:38 crc kubenswrapper[5117]: I0130 00:20:38.736492 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"755f8018115b54b16f761f430943c77d7332265157df60194f7db376ad454ebd"} Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.251988 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.257643 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.293119 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-sdjgw_c0ccdffb-2e23-428a-8423-b08f9d708b15/kube-multus/0.log" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.295784 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sdjgw_c0ccdffb-2e23-428a-8423-b08f9d708b15/kube-multus/0.log" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.301415 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.303602 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.487037 5117 scope.go:117] "RemoveContainer" containerID="762e82c3873eda655c95fac58f27da06a1b0d4fd47858d1e48bbe5871c068536" Jan 30 00:20:39 crc kubenswrapper[5117]: I0130 00:20:39.511604 5117 scope.go:117] "RemoveContainer" containerID="c55611aaea5d428b9efbd42278b3cb5813af341983cc829ed883f927f7f8810c" Jan 30 00:20:40 crc kubenswrapper[5117]: I0130 00:20:40.751674 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"a8603debabaec6aca82a4640b314f40ba7c4758db0114e3364875f710ae12b21"} Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.780064 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" event={"ID":"75b791a0-e01e-471f-9033-170548aebe3a","Type":"ContainerStarted","Data":"3243cfc4f68489f00d6a1a40fd4b93e0a7ddc9c0b2fe2594408617b8962dcfac"} Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.780637 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.780651 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.780668 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.812019 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.812553 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:20:43 crc kubenswrapper[5117]: I0130 00:20:43.825926 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" podStartSLOduration=7.8259113970000005 podStartE2EDuration="7.825911397s" podCreationTimestamp="2026-01-30 00:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:20:43.82172001 +0000 UTC m=+606.933255900" watchObservedRunningTime="2026-01-30 00:20:43.825911397 +0000 UTC m=+606.937447287" Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.555225 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.556921 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.557046 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.558275 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fa20a680f842b91be2f212674ae09218d15dca3e62b236ca705f6ad0d0dc01e"} pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.558419 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" containerID="cri-o://8fa20a680f842b91be2f212674ae09218d15dca3e62b236ca705f6ad0d0dc01e" gracePeriod=600 Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.917186 5117 generic.go:358] "Generic (PLEG): container finished" podID="3965caad-c581-45b3-88e0-99b4039659c5" containerID="8fa20a680f842b91be2f212674ae09218d15dca3e62b236ca705f6ad0d0dc01e" exitCode=0 Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.917269 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerDied","Data":"8fa20a680f842b91be2f212674ae09218d15dca3e62b236ca705f6ad0d0dc01e"} Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.917842 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"073defdd7077d53303dbf34291dabf3d999fa2598157f63385c19d2858c64243"} Jan 30 00:21:04 crc kubenswrapper[5117]: I0130 00:21:04.917883 5117 scope.go:117] "RemoveContainer" containerID="a05881f5d76b5732730f0a57f59c72e0cd420789c5088e30351393724d83be5f" Jan 30 00:21:15 crc kubenswrapper[5117]: I0130 00:21:15.820177 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rvwl6" Jan 30 00:21:39 crc kubenswrapper[5117]: I0130 00:21:39.827931 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cq9w"] Jan 30 00:21:39 crc kubenswrapper[5117]: I0130 00:21:39.830606 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6cq9w" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="registry-server" containerID="cri-o://fe79c5f944b4aab8eafddc127eb78f7a95d82020cbe8d91658a5e7592ab7a7a6" gracePeriod=30 Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.152456 5117 generic.go:358] "Generic 
(PLEG): container finished" podID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerID="fe79c5f944b4aab8eafddc127eb78f7a95d82020cbe8d91658a5e7592ab7a7a6" exitCode=0 Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.152546 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cq9w" event={"ID":"d2e964cb-3a46-4bbc-823f-43ad384d844c","Type":"ContainerDied","Data":"fe79c5f944b4aab8eafddc127eb78f7a95d82020cbe8d91658a5e7592ab7a7a6"} Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.215520 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.341564 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdzc\" (UniqueName: \"kubernetes.io/projected/d2e964cb-3a46-4bbc-823f-43ad384d844c-kube-api-access-5jdzc\") pod \"d2e964cb-3a46-4bbc-823f-43ad384d844c\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.341712 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-utilities\") pod \"d2e964cb-3a46-4bbc-823f-43ad384d844c\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.341778 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-catalog-content\") pod \"d2e964cb-3a46-4bbc-823f-43ad384d844c\" (UID: \"d2e964cb-3a46-4bbc-823f-43ad384d844c\") " Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.344373 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-utilities" (OuterVolumeSpecName: "utilities") pod "d2e964cb-3a46-4bbc-823f-43ad384d844c" (UID: "d2e964cb-3a46-4bbc-823f-43ad384d844c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.348654 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e964cb-3a46-4bbc-823f-43ad384d844c-kube-api-access-5jdzc" (OuterVolumeSpecName: "kube-api-access-5jdzc") pod "d2e964cb-3a46-4bbc-823f-43ad384d844c" (UID: "d2e964cb-3a46-4bbc-823f-43ad384d844c"). InnerVolumeSpecName "kube-api-access-5jdzc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.360224 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2e964cb-3a46-4bbc-823f-43ad384d844c" (UID: "d2e964cb-3a46-4bbc-823f-43ad384d844c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.443059 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.443094 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e964cb-3a46-4bbc-823f-43ad384d844c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:21:40 crc kubenswrapper[5117]: I0130 00:21:40.443107 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5jdzc\" (UniqueName: \"kubernetes.io/projected/d2e964cb-3a46-4bbc-823f-43ad384d844c-kube-api-access-5jdzc\") on node \"crc\" DevicePath \"\"" Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.162294 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cq9w" event={"ID":"d2e964cb-3a46-4bbc-823f-43ad384d844c","Type":"ContainerDied","Data":"caf4520087fe10f071c949a918fcc22a0c8bfa8a79259157f120bbef4db12b5b"} Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.162357 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cq9w" Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.162391 5117 scope.go:117] "RemoveContainer" containerID="fe79c5f944b4aab8eafddc127eb78f7a95d82020cbe8d91658a5e7592ab7a7a6" Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.188916 5117 scope.go:117] "RemoveContainer" containerID="bd60b939c84782959ee32d09c79ed496c3aa1886125d8e8a48b88fb357df0855" Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.190223 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cq9w"] Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.199654 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cq9w"] Jan 30 00:21:41 crc kubenswrapper[5117]: I0130 00:21:41.212797 5117 scope.go:117] "RemoveContainer" containerID="a68285855e082e326befd33b629a79839b8072fd4f675a8806fd748b417ad06e" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.046486 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" path="/var/lib/kubelet/pods/d2e964cb-3a46-4bbc-823f-43ad384d844c/volumes" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.525147 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr"] Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526154 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="extract-utilities" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526190 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="extract-utilities" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526241 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="extract-content" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526255 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="extract-content" Jan 
30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526298 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="registry-server" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526313 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="registry-server" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.526465 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2e964cb-3a46-4bbc-823f-43ad384d844c" containerName="registry-server" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.547566 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr"] Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.547956 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.551632 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.685950 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.686291 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.686443 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbs6z\" (UniqueName: \"kubernetes.io/projected/a0ee5f51-4db4-4713-bd3a-850996fcb555-kube-api-access-tbs6z\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.787806 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tbs6z\" (UniqueName: \"kubernetes.io/projected/a0ee5f51-4db4-4713-bd3a-850996fcb555-kube-api-access-tbs6z\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.787927 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.787988 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.788371 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.788525 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.810512 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbs6z\" (UniqueName: \"kubernetes.io/projected/a0ee5f51-4db4-4713-bd3a-850996fcb555-kube-api-access-tbs6z\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:43 crc kubenswrapper[5117]: I0130 00:21:43.895116 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:44 crc kubenswrapper[5117]: I0130 00:21:44.136024 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr"] Jan 30 00:21:44 crc kubenswrapper[5117]: I0130 00:21:44.147852 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:21:44 crc kubenswrapper[5117]: I0130 00:21:44.186879 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" event={"ID":"a0ee5f51-4db4-4713-bd3a-850996fcb555","Type":"ContainerStarted","Data":"5af00aaa8e49ed571a0d4f2830ca2e72b46b059aeeef9953c3f2b5aa9765f187"} Jan 30 00:21:45 crc kubenswrapper[5117]: I0130 00:21:45.197180 5117 generic.go:358] "Generic (PLEG): container finished" podID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerID="1c3f9a0fc56d833c3935ff9923943af3fd0889d522b69f01af20440d43eaa923" exitCode=0 Jan 30 00:21:45 crc kubenswrapper[5117]: I0130 00:21:45.197261 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" event={"ID":"a0ee5f51-4db4-4713-bd3a-850996fcb555","Type":"ContainerDied","Data":"1c3f9a0fc56d833c3935ff9923943af3fd0889d522b69f01af20440d43eaa923"} Jan 30 00:21:47 crc kubenswrapper[5117]: I0130 00:21:47.218523 5117 generic.go:358] "Generic (PLEG): container finished" podID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerID="b1757af8ecbdca7b7e78526387afaa04a38e4c9a98a34251a02f4a1055bc7fe5" exitCode=0 Jan 30 00:21:47 crc kubenswrapper[5117]: I0130 00:21:47.218605 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" event={"ID":"a0ee5f51-4db4-4713-bd3a-850996fcb555","Type":"ContainerDied","Data":"b1757af8ecbdca7b7e78526387afaa04a38e4c9a98a34251a02f4a1055bc7fe5"} Jan 30 00:21:48 crc kubenswrapper[5117]: I0130 00:21:48.229995 5117 generic.go:358] "Generic (PLEG): container finished" podID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerID="721fd0702395cac6709d1cf81da70b13dd6acb78a03b3c049955d3191349e4a2" exitCode=0 Jan 30 00:21:48 crc kubenswrapper[5117]: I0130 00:21:48.230129 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" event={"ID":"a0ee5f51-4db4-4713-bd3a-850996fcb555","Type":"ContainerDied","Data":"721fd0702395cac6709d1cf81da70b13dd6acb78a03b3c049955d3191349e4a2"} Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.501144 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h"] Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.507962 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.513873 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h"] Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.574543 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.674640 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-bundle\") pod \"a0ee5f51-4db4-4713-bd3a-850996fcb555\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.674818 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbs6z\" (UniqueName: \"kubernetes.io/projected/a0ee5f51-4db4-4713-bd3a-850996fcb555-kube-api-access-tbs6z\") pod \"a0ee5f51-4db4-4713-bd3a-850996fcb555\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.674881 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-util\") pod \"a0ee5f51-4db4-4713-bd3a-850996fcb555\" (UID: \"a0ee5f51-4db4-4713-bd3a-850996fcb555\") " Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.675103 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlg7\" (UniqueName: \"kubernetes.io/projected/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-kube-api-access-2nlg7\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.675195 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.675319 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.677248 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-bundle" (OuterVolumeSpecName: "bundle") pod "a0ee5f51-4db4-4713-bd3a-850996fcb555" (UID: "a0ee5f51-4db4-4713-bd3a-850996fcb555"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.684784 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ee5f51-4db4-4713-bd3a-850996fcb555-kube-api-access-tbs6z" (OuterVolumeSpecName: "kube-api-access-tbs6z") pod "a0ee5f51-4db4-4713-bd3a-850996fcb555" (UID: "a0ee5f51-4db4-4713-bd3a-850996fcb555"). InnerVolumeSpecName "kube-api-access-tbs6z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.698090 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-util" (OuterVolumeSpecName: "util") pod "a0ee5f51-4db4-4713-bd3a-850996fcb555" (UID: "a0ee5f51-4db4-4713-bd3a-850996fcb555"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.776651 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.776733 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlg7\" (UniqueName: \"kubernetes.io/projected/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-kube-api-access-2nlg7\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.776771 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.776837 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tbs6z\" (UniqueName: \"kubernetes.io/projected/a0ee5f51-4db4-4713-bd3a-850996fcb555-kube-api-access-tbs6z\") on node \"crc\" DevicePath \"\"" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.776848 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.777476 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.777517 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0ee5f51-4db4-4713-bd3a-850996fcb555-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.777461 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0791d08-fb28-4fed-9fc1-f4a1c7d8c077-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h\" (UID: \"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" Jan 30 00:21:49 crc 
Jan 30 00:21:49 crc kubenswrapper[5117]: I0130 00:21:49.829346 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.106963 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h"]
Jan 30 00:21:50 crc kubenswrapper[5117]: W0130 00:21:50.111567 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0791d08_fb28_4fed_9fc1_f4a1c7d8c077.slice/crio-5d2d9f0091688b69a6595fe5a2bb73523a4cff2d40b5b93d3474ba4710669f62 WatchSource:0}: Error finding container 5d2d9f0091688b69a6595fe5a2bb73523a4cff2d40b5b93d3474ba4710669f62: Status 404 returned error can't find the container with id 5d2d9f0091688b69a6595fe5a2bb73523a4cff2d40b5b93d3474ba4710669f62
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.243579 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" event={"ID":"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077","Type":"ContainerStarted","Data":"5d2d9f0091688b69a6595fe5a2bb73523a4cff2d40b5b93d3474ba4710669f62"}
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.245861 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr" event={"ID":"a0ee5f51-4db4-4713-bd3a-850996fcb555","Type":"ContainerDied","Data":"5af00aaa8e49ed571a0d4f2830ca2e72b46b059aeeef9953c3f2b5aa9765f187"}
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.245886 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af00aaa8e49ed571a0d4f2830ca2e72b46b059aeeef9953c3f2b5aa9765f187"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.245989 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.496434 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l"]
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.497449 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="util"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.497472 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="util"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.497490 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="pull"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.497502 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="pull"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.497551 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="extract"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.497564 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="extract"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.498738 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0ee5f51-4db4-4713-bd3a-850996fcb555" containerName="extract"
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.511808 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l"]
Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.511947 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l"
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.693368 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gql8x\" (UniqueName: \"kubernetes.io/projected/08d3015b-53e3-4714-a88f-ce216cdbf7db-kube-api-access-gql8x\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.693763 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.693809 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.795347 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.795454 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.795530 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gql8x\" (UniqueName: \"kubernetes.io/projected/08d3015b-53e3-4714-a88f-ce216cdbf7db-kube-api-access-gql8x\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.796322 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.796374 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:50 crc kubenswrapper[5117]: I0130 00:21:50.856271 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gql8x\" (UniqueName: \"kubernetes.io/projected/08d3015b-53e3-4714-a88f-ce216cdbf7db-kube-api-access-gql8x\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:51 crc kubenswrapper[5117]: I0130 00:21:51.136533 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" Jan 30 00:21:51 crc kubenswrapper[5117]: I0130 00:21:51.257335 5117 generic.go:358] "Generic (PLEG): container finished" podID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" containerID="a2f359e11b3299c8c57db10a14d6d9eec92b2db74e9c31c327bf65a1af208f6f" exitCode=0 Jan 30 00:21:51 crc kubenswrapper[5117]: I0130 00:21:51.257550 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" event={"ID":"e0791d08-fb28-4fed-9fc1-f4a1c7d8c077","Type":"ContainerDied","Data":"a2f359e11b3299c8c57db10a14d6d9eec92b2db74e9c31c327bf65a1af208f6f"} Jan 30 00:21:51 crc kubenswrapper[5117]: E0130 00:21:51.297748 5117 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:21:51 crc kubenswrapper[5117]: E0130 00:21:51.298380 5117 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nlg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_openshift-marketplace(e0791d08-fb28-4fed-9fc1-f4a1c7d8c077): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:21:51 crc kubenswrapper[5117]: E0130 00:21:51.299737 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:21:51 crc kubenswrapper[5117]: I0130 00:21:51.330374 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l"] Jan 30 00:21:51 crc kubenswrapper[5117]: W0130 00:21:51.335074 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d3015b_53e3_4714_a88f_ce216cdbf7db.slice/crio-f4f71bf574451988b79b8fd2c4b1e1edefc6a5c75605c9a6c2dc325f2db91606 WatchSource:0}: Error finding container f4f71bf574451988b79b8fd2c4b1e1edefc6a5c75605c9a6c2dc325f2db91606: Status 404 returned 
Jan 30 00:21:52 crc kubenswrapper[5117]: I0130 00:21:52.263575 5117 generic.go:358] "Generic (PLEG): container finished" podID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerID="e9cea13e14135d820119684eda3d51d4000029025fb3da7e1980c0d9bd8f9196" exitCode=0
Jan 30 00:21:52 crc kubenswrapper[5117]: I0130 00:21:52.263649 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" event={"ID":"08d3015b-53e3-4714-a88f-ce216cdbf7db","Type":"ContainerDied","Data":"e9cea13e14135d820119684eda3d51d4000029025fb3da7e1980c0d9bd8f9196"}
Jan 30 00:21:52 crc kubenswrapper[5117]: I0130 00:21:52.263676 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" event={"ID":"08d3015b-53e3-4714-a88f-ce216cdbf7db","Type":"ContainerStarted","Data":"f4f71bf574451988b79b8fd2c4b1e1edefc6a5c75605c9a6c2dc325f2db91606"}
Jan 30 00:21:52 crc kubenswrapper[5117]: E0130 00:21:52.265835 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:21:54 crc kubenswrapper[5117]: I0130 00:21:54.310863 5117 generic.go:358] "Generic (PLEG): container finished" podID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerID="b86b72de2a8a47b24ca464e2a96aa41db11c8c9a66cc565656ab15bbac1878f0" exitCode=0
Jan 30 00:21:54 crc kubenswrapper[5117]: I0130 00:21:54.310964 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" event={"ID":"08d3015b-53e3-4714-a88f-ce216cdbf7db","Type":"ContainerDied","Data":"b86b72de2a8a47b24ca464e2a96aa41db11c8c9a66cc565656ab15bbac1878f0"}
Jan 30 00:21:55 crc kubenswrapper[5117]: I0130 00:21:55.317857 5117 generic.go:358] "Generic (PLEG): container finished" podID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerID="2a2816ae76db3b7d683a0341d2f4ed2786266fd231d255adba8bc1a7a345916d" exitCode=0
Jan 30 00:21:55 crc kubenswrapper[5117]: I0130 00:21:55.317931 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" event={"ID":"08d3015b-53e3-4714-a88f-ce216cdbf7db","Type":"ContainerDied","Data":"2a2816ae76db3b7d683a0341d2f4ed2786266fd231d255adba8bc1a7a345916d"}
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.721088 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l"
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.785925 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-bundle\") pod \"08d3015b-53e3-4714-a88f-ce216cdbf7db\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") "
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.785987 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-util\") pod \"08d3015b-53e3-4714-a88f-ce216cdbf7db\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") "
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.786065 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gql8x\" (UniqueName: \"kubernetes.io/projected/08d3015b-53e3-4714-a88f-ce216cdbf7db-kube-api-access-gql8x\") pod \"08d3015b-53e3-4714-a88f-ce216cdbf7db\" (UID: \"08d3015b-53e3-4714-a88f-ce216cdbf7db\") "
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.787249 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-bundle" (OuterVolumeSpecName: "bundle") pod "08d3015b-53e3-4714-a88f-ce216cdbf7db" (UID: "08d3015b-53e3-4714-a88f-ce216cdbf7db"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.810441 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d3015b-53e3-4714-a88f-ce216cdbf7db-kube-api-access-gql8x" (OuterVolumeSpecName: "kube-api-access-gql8x") pod "08d3015b-53e3-4714-a88f-ce216cdbf7db" (UID: "08d3015b-53e3-4714-a88f-ce216cdbf7db"). InnerVolumeSpecName "kube-api-access-gql8x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.849796 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-util" (OuterVolumeSpecName: "util") pod "08d3015b-53e3-4714-a88f-ce216cdbf7db" (UID: "08d3015b-53e3-4714-a88f-ce216cdbf7db"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.887012 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gql8x\" (UniqueName: \"kubernetes.io/projected/08d3015b-53e3-4714-a88f-ce216cdbf7db-kube-api-access-gql8x\") on node \"crc\" DevicePath \"\""
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.887061 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 00:21:56 crc kubenswrapper[5117]: I0130 00:21:56.887070 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08d3015b-53e3-4714-a88f-ce216cdbf7db-util\") on node \"crc\" DevicePath \"\""
Jan 30 00:21:57 crc kubenswrapper[5117]: I0130 00:21:57.327832 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l" event={"ID":"08d3015b-53e3-4714-a88f-ce216cdbf7db","Type":"ContainerDied","Data":"f4f71bf574451988b79b8fd2c4b1e1edefc6a5c75605c9a6c2dc325f2db91606"}
Jan 30 00:21:57 crc kubenswrapper[5117]: I0130 00:21:57.327869 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f71bf574451988b79b8fd2c4b1e1edefc6a5c75605c9a6c2dc325f2db91606"
Jan 30 00:21:57 crc kubenswrapper[5117]: I0130 00:21:57.327941 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.549610 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76"]
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550371 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="pull"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550382 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="pull"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550404 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="extract"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550411 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="extract"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550433 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="util"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550439 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="util"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.550536 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="08d3015b-53e3-4714-a88f-ce216cdbf7db" containerName="extract"
Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.558827 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76"
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.563812 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76"] Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.605631 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.605733 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.605904 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdnpq\" (UniqueName: \"kubernetes.io/projected/df52d557-84e5-4c20-85f4-751779ecdeff-kube-api-access-fdnpq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.707105 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.707156 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdnpq\" (UniqueName: \"kubernetes.io/projected/df52d557-84e5-4c20-85f4-751779ecdeff-kube-api-access-fdnpq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.707193 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.707714 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.707722 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.725256 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdnpq\" (UniqueName: \"kubernetes.io/projected/df52d557-84e5-4c20-85f4-751779ecdeff-kube-api-access-fdnpq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:58 crc kubenswrapper[5117]: I0130 00:21:58.874705 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.104574 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.112302 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.115477 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.115663 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-9mghr\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.115922 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.116198 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.124367 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29ns\" (UniqueName: \"kubernetes.io/projected/7858922f-a122-4c3c-8e82-2941f771c502-kube-api-access-v29ns\") pod \"obo-prometheus-operator-9bc85b4bf-nsdnx\" (UID: \"7858922f-a122-4c3c-8e82-2941f771c502\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.225235 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v29ns\" (UniqueName: \"kubernetes.io/projected/7858922f-a122-4c3c-8e82-2941f771c502-kube-api-access-v29ns\") pod \"obo-prometheus-operator-9bc85b4bf-nsdnx\" (UID: \"7858922f-a122-4c3c-8e82-2941f771c502\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.236771 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 
00:21:59.245100 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.251247 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.257943 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.258005 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-wm8kb\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.273456 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.305914 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.306075 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.307039 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29ns\" (UniqueName: \"kubernetes.io/projected/7858922f-a122-4c3c-8e82-2941f771c502-kube-api-access-v29ns\") pod \"obo-prometheus-operator-9bc85b4bf-nsdnx\" (UID: \"7858922f-a122-4c3c-8e82-2941f771c502\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.326041 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16e72675-b6b0-409c-a161-7d1add8eba30-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n\" (UID: \"16e72675-b6b0-409c-a161-7d1add8eba30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.326088 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dad4104-a9d8-45e9-9eae-39a841d6bd14-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4\" (UID: \"2dad4104-a9d8-45e9-9eae-39a841d6bd14\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.326166 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dad4104-a9d8-45e9-9eae-39a841d6bd14-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4\" (UID: \"2dad4104-a9d8-45e9-9eae-39a841d6bd14\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.326189 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/16e72675-b6b0-409c-a161-7d1add8eba30-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n\" (UID: \"16e72675-b6b0-409c-a161-7d1add8eba30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.422981 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.426657 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16e72675-b6b0-409c-a161-7d1add8eba30-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n\" (UID: \"16e72675-b6b0-409c-a161-7d1add8eba30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.426724 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dad4104-a9d8-45e9-9eae-39a841d6bd14-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4\" (UID: \"2dad4104-a9d8-45e9-9eae-39a841d6bd14\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.426822 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dad4104-a9d8-45e9-9eae-39a841d6bd14-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4\" (UID: \"2dad4104-a9d8-45e9-9eae-39a841d6bd14\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.426861 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16e72675-b6b0-409c-a161-7d1add8eba30-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n\" (UID: \"16e72675-b6b0-409c-a161-7d1add8eba30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: W0130 00:21:59.427421 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf52d557_84e5_4c20_85f4_751779ecdeff.slice/crio-72076b1b925cb2a24a057b1448a0eefe54bf004558ba4ff8bbe32115b2d237cf WatchSource:0}: Error finding container 72076b1b925cb2a24a057b1448a0eefe54bf004558ba4ff8bbe32115b2d237cf: Status 404 returned error can't find the container with id 72076b1b925cb2a24a057b1448a0eefe54bf004558ba4ff8bbe32115b2d237cf Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.432680 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dad4104-a9d8-45e9-9eae-39a841d6bd14-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4\" (UID: \"2dad4104-a9d8-45e9-9eae-39a841d6bd14\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.434357 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16e72675-b6b0-409c-a161-7d1add8eba30-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n\" (UID: \"16e72675-b6b0-409c-a161-7d1add8eba30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.439201 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16e72675-b6b0-409c-a161-7d1add8eba30-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n\" (UID: \"16e72675-b6b0-409c-a161-7d1add8eba30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.439513 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dad4104-a9d8-45e9-9eae-39a841d6bd14-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4\" (UID: \"2dad4104-a9d8-45e9-9eae-39a841d6bd14\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.442447 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.453888 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-g88kv"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.465010 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.468395 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-g88kv"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.474294 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.474656 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-pkzpb\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.528625 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dbb\" (UniqueName: \"kubernetes.io/projected/2d031867-84f8-4c5b-824a-3be88a288652-kube-api-access-26dbb\") pod \"observability-operator-85c68dddb-g88kv\" (UID: \"2d031867-84f8-4c5b-824a-3be88a288652\") " pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.528720 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d031867-84f8-4c5b-824a-3be88a288652-observability-operator-tls\") pod \"observability-operator-85c68dddb-g88kv\" (UID: \"2d031867-84f8-4c5b-824a-3be88a288652\") " pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.574828 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.630137 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26dbb\" (UniqueName: \"kubernetes.io/projected/2d031867-84f8-4c5b-824a-3be88a288652-kube-api-access-26dbb\") pod \"observability-operator-85c68dddb-g88kv\" (UID: \"2d031867-84f8-4c5b-824a-3be88a288652\") " pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.631320 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d031867-84f8-4c5b-824a-3be88a288652-observability-operator-tls\") pod \"observability-operator-85c68dddb-g88kv\" (UID: \"2d031867-84f8-4c5b-824a-3be88a288652\") " pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.636203 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d031867-84f8-4c5b-824a-3be88a288652-observability-operator-tls\") pod \"observability-operator-85c68dddb-g88kv\" (UID: \"2d031867-84f8-4c5b-824a-3be88a288652\") " pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.641889 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.690493 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dbb\" (UniqueName: \"kubernetes.io/projected/2d031867-84f8-4c5b-824a-3be88a288652-kube-api-access-26dbb\") pod \"observability-operator-85c68dddb-g88kv\" (UID: \"2d031867-84f8-4c5b-824a-3be88a288652\") " pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.691880 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-xrds8"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.712137 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.720168 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-gmdtj\"" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.726956 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-xrds8"] Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.732205 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h22jx\" (UniqueName: \"kubernetes.io/projected/e3aa9511-b055-404b-b641-6b26327a7ac4-kube-api-access-h22jx\") pod \"perses-operator-669c9f96b5-xrds8\" (UID: \"e3aa9511-b055-404b-b641-6b26327a7ac4\") " pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.732472 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3aa9511-b055-404b-b641-6b26327a7ac4-openshift-service-ca\") pod \"perses-operator-669c9f96b5-xrds8\" (UID: \"e3aa9511-b055-404b-b641-6b26327a7ac4\") " pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.792703 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.835232 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3aa9511-b055-404b-b641-6b26327a7ac4-openshift-service-ca\") pod \"perses-operator-669c9f96b5-xrds8\" (UID: \"e3aa9511-b055-404b-b641-6b26327a7ac4\") " pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.835311 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h22jx\" (UniqueName: \"kubernetes.io/projected/e3aa9511-b055-404b-b641-6b26327a7ac4-kube-api-access-h22jx\") pod \"perses-operator-669c9f96b5-xrds8\" (UID: \"e3aa9511-b055-404b-b641-6b26327a7ac4\") " pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.836617 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3aa9511-b055-404b-b641-6b26327a7ac4-openshift-service-ca\") pod \"perses-operator-669c9f96b5-xrds8\" (UID: \"e3aa9511-b055-404b-b641-6b26327a7ac4\") " pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.894070 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h22jx\" (UniqueName: \"kubernetes.io/projected/e3aa9511-b055-404b-b641-6b26327a7ac4-kube-api-access-h22jx\") pod \"perses-operator-669c9f96b5-xrds8\" (UID: \"e3aa9511-b055-404b-b641-6b26327a7ac4\") " pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:21:59 crc kubenswrapper[5117]: I0130 00:21:59.987155 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx"] Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.124997 5117 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29495542-wzq4f"] Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.135377 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.135666 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-wzq4f"] Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.140260 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpx69\" (UniqueName: \"kubernetes.io/projected/64980233-03c4-482d-bf2d-1bb9e9bc6614-kube-api-access-bpx69\") pod \"auto-csr-approver-29495542-wzq4f\" (UID: \"64980233-03c4-482d-bf2d-1bb9e9bc6614\") " pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.141193 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.141248 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.141273 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-f9hbv\"" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.148445 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.184835 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n"] Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.241625 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bpx69\" (UniqueName: \"kubernetes.io/projected/64980233-03c4-482d-bf2d-1bb9e9bc6614-kube-api-access-bpx69\") pod \"auto-csr-approver-29495542-wzq4f\" (UID: \"64980233-03c4-482d-bf2d-1bb9e9bc6614\") " pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.268382 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpx69\" (UniqueName: \"kubernetes.io/projected/64980233-03c4-482d-bf2d-1bb9e9bc6614-kube-api-access-bpx69\") pod \"auto-csr-approver-29495542-wzq4f\" (UID: \"64980233-03c4-482d-bf2d-1bb9e9bc6614\") " pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.270065 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4"] Jan 30 00:22:00 crc kubenswrapper[5117]: W0130 00:22:00.284484 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dad4104_a9d8_45e9_9eae_39a841d6bd14.slice/crio-2fb7e0b99a39e02353a075049611b0473745009cf3fc0d768b4769afc0edf826 WatchSource:0}: Error finding container 2fb7e0b99a39e02353a075049611b0473745009cf3fc0d768b4769afc0edf826: Status 404 returned error can't find the container with id 2fb7e0b99a39e02353a075049611b0473745009cf3fc0d768b4769afc0edf826 Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.349864 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" event={"ID":"2dad4104-a9d8-45e9-9eae-39a841d6bd14","Type":"ContainerStarted","Data":"2fb7e0b99a39e02353a075049611b0473745009cf3fc0d768b4769afc0edf826"} Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.360402 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" event={"ID":"16e72675-b6b0-409c-a161-7d1add8eba30","Type":"ContainerStarted","Data":"9e524055e9793371bd8c3a24c1140eb8057dca38c09db7526a510cc74376db60"} Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.362200 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" event={"ID":"7858922f-a122-4c3c-8e82-2941f771c502","Type":"ContainerStarted","Data":"d48f6326a22e1cb782d54369b82ae87e198c8a50927711515e677c68a2266647"} Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.364903 5117 generic.go:358] "Generic (PLEG): container finished" podID="df52d557-84e5-4c20-85f4-751779ecdeff" containerID="fceb9b9619a8d902906312b6e3215e89cd6988a6f2971649e0b1323f99a7350b" exitCode=0 Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.364966 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" event={"ID":"df52d557-84e5-4c20-85f4-751779ecdeff","Type":"ContainerDied","Data":"fceb9b9619a8d902906312b6e3215e89cd6988a6f2971649e0b1323f99a7350b"} Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.364981 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" event={"ID":"df52d557-84e5-4c20-85f4-751779ecdeff","Type":"ContainerStarted","Data":"72076b1b925cb2a24a057b1448a0eefe54bf004558ba4ff8bbe32115b2d237cf"} Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.413062 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-xrds8"] Jan 30 00:22:00 crc kubenswrapper[5117]: W0130 00:22:00.420263 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3aa9511_b055_404b_b641_6b26327a7ac4.slice/crio-693589b9e5b17ac2c8eb4992ed19250630b890a27ce5acc500260e5eadb7006d WatchSource:0}: Error finding container 693589b9e5b17ac2c8eb4992ed19250630b890a27ce5acc500260e5eadb7006d: Status 404 returned error can't find the container with id 693589b9e5b17ac2c8eb4992ed19250630b890a27ce5acc500260e5eadb7006d Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.444001 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-g88kv"] Jan 30 00:22:00 crc kubenswrapper[5117]: I0130 00:22:00.464095 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:01 crc kubenswrapper[5117]: I0130 00:22:01.272134 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-wzq4f"] Jan 30 00:22:01 crc kubenswrapper[5117]: I0130 00:22:01.394994 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" event={"ID":"64980233-03c4-482d-bf2d-1bb9e9bc6614","Type":"ContainerStarted","Data":"f76c03eb30839012670bfd1d64827814f17dce97a9313cb9a74a035fc17c8f6e"} Jan 30 00:22:01 crc kubenswrapper[5117]: I0130 00:22:01.424932 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" event={"ID":"e3aa9511-b055-404b-b641-6b26327a7ac4","Type":"ContainerStarted","Data":"693589b9e5b17ac2c8eb4992ed19250630b890a27ce5acc500260e5eadb7006d"} Jan 30 00:22:01 crc kubenswrapper[5117]: I0130 00:22:01.432372 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-g88kv" event={"ID":"2d031867-84f8-4c5b-824a-3be88a288652","Type":"ContainerStarted","Data":"10919862963df16bbf4b9e00c0f164139f2176deb9a18c560276b1ecb717ab81"} Jan 30 00:22:04 crc kubenswrapper[5117]: E0130 00:22:04.291485 5117 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:22:04 crc kubenswrapper[5117]: E0130 00:22:04.291888 5117 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nlg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_openshift-marketplace(e0791d08-fb28-4fed-9fc1-f4a1c7d8c077): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:22:04 crc kubenswrapper[5117]: E0130 00:22:04.296788 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:22:04 crc kubenswrapper[5117]: I0130 00:22:04.521220 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" event={"ID":"64980233-03c4-482d-bf2d-1bb9e9bc6614","Type":"ContainerStarted","Data":"07cc440485a45988bcf62dee2e6ddfcf006421300ab04f7f12c2b358b746fcac"} Jan 30 00:22:04 crc kubenswrapper[5117]: I0130 00:22:04.538152 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" podStartSLOduration=3.380738952 podStartE2EDuration="4.538115559s" podCreationTimestamp="2026-01-30 00:22:00 +0000 UTC" 
firstStartedPulling="2026-01-30 00:22:01.333910933 +0000 UTC m=+684.445446823" lastFinishedPulling="2026-01-30 00:22:02.49128754 +0000 UTC m=+685.602823430" observedRunningTime="2026-01-30 00:22:04.537196844 +0000 UTC m=+687.648732744" watchObservedRunningTime="2026-01-30 00:22:04.538115559 +0000 UTC m=+687.649651449" Jan 30 00:22:05 crc kubenswrapper[5117]: I0130 00:22:05.528581 5117 generic.go:358] "Generic (PLEG): container finished" podID="64980233-03c4-482d-bf2d-1bb9e9bc6614" containerID="07cc440485a45988bcf62dee2e6ddfcf006421300ab04f7f12c2b358b746fcac" exitCode=0 Jan 30 00:22:05 crc kubenswrapper[5117]: I0130 00:22:05.528696 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" event={"ID":"64980233-03c4-482d-bf2d-1bb9e9bc6614","Type":"ContainerDied","Data":"07cc440485a45988bcf62dee2e6ddfcf006421300ab04f7f12c2b358b746fcac"} Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.095617 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.277758 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpx69\" (UniqueName: \"kubernetes.io/projected/64980233-03c4-482d-bf2d-1bb9e9bc6614-kube-api-access-bpx69\") pod \"64980233-03c4-482d-bf2d-1bb9e9bc6614\" (UID: \"64980233-03c4-482d-bf2d-1bb9e9bc6614\") " Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.298920 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64980233-03c4-482d-bf2d-1bb9e9bc6614-kube-api-access-bpx69" (OuterVolumeSpecName: "kube-api-access-bpx69") pod "64980233-03c4-482d-bf2d-1bb9e9bc6614" (UID: "64980233-03c4-482d-bf2d-1bb9e9bc6614"). InnerVolumeSpecName "kube-api-access-bpx69". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.379647 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpx69\" (UniqueName: \"kubernetes.io/projected/64980233-03c4-482d-bf2d-1bb9e9bc6614-kube-api-access-bpx69\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.561878 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.561877 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-wzq4f" event={"ID":"64980233-03c4-482d-bf2d-1bb9e9bc6614","Type":"ContainerDied","Data":"f76c03eb30839012670bfd1d64827814f17dce97a9313cb9a74a035fc17c8f6e"} Jan 30 00:22:08 crc kubenswrapper[5117]: I0130 00:22:08.562242 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76c03eb30839012670bfd1d64827814f17dce97a9313cb9a74a035fc17c8f6e" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.640024 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" event={"ID":"e3aa9511-b055-404b-b641-6b26327a7ac4","Type":"ContainerStarted","Data":"04ae669c44e883673a18036adf14c7a43bde20cf13d746760d2eac6815bd7c6c"} Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.641594 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.641670 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-g88kv" event={"ID":"2d031867-84f8-4c5b-824a-3be88a288652","Type":"ContainerStarted","Data":"b95c48dab765c501d0f32a2e616a13606dd34b01f607509a63318b832213eba8"} Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.642033 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.643051 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" event={"ID":"7858922f-a122-4c3c-8e82-2941f771c502","Type":"ContainerStarted","Data":"fbfb6f3512bbd03ff2413d1a016a238b308f0b049cf1daf7cbd38847533bef73"} Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.644797 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-g88kv" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.645622 5117 generic.go:358] "Generic (PLEG): container finished" podID="df52d557-84e5-4c20-85f4-751779ecdeff" containerID="79700be0fc1ebd8f6c9fac3c7fed5478f39d40fafca8f94cf3615cd79932aada" exitCode=0 Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.645652 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" event={"ID":"df52d557-84e5-4c20-85f4-751779ecdeff","Type":"ContainerDied","Data":"79700be0fc1ebd8f6c9fac3c7fed5478f39d40fafca8f94cf3615cd79932aada"} Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.647024 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" event={"ID":"2dad4104-a9d8-45e9-9eae-39a841d6bd14","Type":"ContainerStarted","Data":"53f8bcca765a3c4e66f2e10902b5c0d31daa6250f7e7270bf10f5282eb770f8a"} Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.648460 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" event={"ID":"16e72675-b6b0-409c-a161-7d1add8eba30","Type":"ContainerStarted","Data":"4488fef703f89dedab374c615a67e70a29568610d3f73537b708681acb767fcd"} 
Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.670380 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" podStartSLOduration=2.626056335 podStartE2EDuration="17.670361843s" podCreationTimestamp="2026-01-30 00:21:59 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.423879123 +0000 UTC m=+683.535415013" lastFinishedPulling="2026-01-30 00:22:15.468184621 +0000 UTC m=+698.579720521" observedRunningTime="2026-01-30 00:22:16.66666731 +0000 UTC m=+699.778203230" watchObservedRunningTime="2026-01-30 00:22:16.670361843 +0000 UTC m=+699.781897733" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.731965 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4" podStartSLOduration=2.587484655 podStartE2EDuration="17.731947938s" podCreationTimestamp="2026-01-30 00:21:59 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.303266915 +0000 UTC m=+683.414802805" lastFinishedPulling="2026-01-30 00:22:15.447730198 +0000 UTC m=+698.559266088" observedRunningTime="2026-01-30 00:22:16.727676118 +0000 UTC m=+699.839211998" watchObservedRunningTime="2026-01-30 00:22:16.731947938 +0000 UTC m=+699.843483828" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.773101 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-g88kv" podStartSLOduration=2.781386407 podStartE2EDuration="17.77308503s" podCreationTimestamp="2026-01-30 00:21:59 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.455548641 +0000 UTC m=+683.567084531" lastFinishedPulling="2026-01-30 00:22:15.447247264 +0000 UTC m=+698.558783154" observedRunningTime="2026-01-30 00:22:16.768532873 +0000 UTC m=+699.880068783" watchObservedRunningTime="2026-01-30 00:22:16.77308503 +0000 UTC m=+699.884620910" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.820440 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-nsdnx" podStartSLOduration=2.385477297 podStartE2EDuration="17.820424776s" podCreationTimestamp="2026-01-30 00:21:59 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.01246783 +0000 UTC m=+683.124003720" lastFinishedPulling="2026-01-30 00:22:15.447415309 +0000 UTC m=+698.558951199" observedRunningTime="2026-01-30 00:22:16.818738409 +0000 UTC m=+699.930274299" watchObservedRunningTime="2026-01-30 00:22:16.820424776 +0000 UTC m=+699.931960666" Jan 30 00:22:16 crc kubenswrapper[5117]: I0130 00:22:16.883543 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n" podStartSLOduration=2.566896559 podStartE2EDuration="17.883526524s" podCreationTimestamp="2026-01-30 00:21:59 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.189386445 +0000 UTC m=+683.300922335" lastFinishedPulling="2026-01-30 00:22:15.50601641 +0000 UTC m=+698.617552300" observedRunningTime="2026-01-30 00:22:16.881234089 +0000 UTC m=+699.992769979" watchObservedRunningTime="2026-01-30 00:22:16.883526524 +0000 UTC m=+699.995062414" Jan 30 00:22:17 crc kubenswrapper[5117]: I0130 00:22:17.655710 5117 generic.go:358] "Generic (PLEG): container finished" podID="df52d557-84e5-4c20-85f4-751779ecdeff" containerID="d26f47027eb8dc1f940ac200d875c3dce9194e3919e665c93c30efd428c8c7b7" exitCode=0 Jan 30 00:22:17 crc kubenswrapper[5117]: I0130 
00:22:17.656626 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" event={"ID":"df52d557-84e5-4c20-85f4-751779ecdeff","Type":"ContainerDied","Data":"d26f47027eb8dc1f940ac200d875c3dce9194e3919e665c93c30efd428c8c7b7"} Jan 30 00:22:18 crc kubenswrapper[5117]: I0130 00:22:18.973779 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.031397 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-bundle\") pod \"df52d557-84e5-4c20-85f4-751779ecdeff\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.031720 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-util\") pod \"df52d557-84e5-4c20-85f4-751779ecdeff\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.032247 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-bundle" (OuterVolumeSpecName: "bundle") pod "df52d557-84e5-4c20-85f4-751779ecdeff" (UID: "df52d557-84e5-4c20-85f4-751779ecdeff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.037154 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdnpq\" (UniqueName: \"kubernetes.io/projected/df52d557-84e5-4c20-85f4-751779ecdeff-kube-api-access-fdnpq\") pod \"df52d557-84e5-4c20-85f4-751779ecdeff\" (UID: \"df52d557-84e5-4c20-85f4-751779ecdeff\") " Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.038080 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.040757 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-util" (OuterVolumeSpecName: "util") pod "df52d557-84e5-4c20-85f4-751779ecdeff" (UID: "df52d557-84e5-4c20-85f4-751779ecdeff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.046944 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df52d557-84e5-4c20-85f4-751779ecdeff-kube-api-access-fdnpq" (OuterVolumeSpecName: "kube-api-access-fdnpq") pod "df52d557-84e5-4c20-85f4-751779ecdeff" (UID: "df52d557-84e5-4c20-85f4-751779ecdeff"). InnerVolumeSpecName "kube-api-access-fdnpq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:19 crc kubenswrapper[5117]: E0130 00:22:19.056948 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.140809 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df52d557-84e5-4c20-85f4-751779ecdeff-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.140849 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdnpq\" (UniqueName: \"kubernetes.io/projected/df52d557-84e5-4c20-85f4-751779ecdeff-kube-api-access-fdnpq\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.667918 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" event={"ID":"df52d557-84e5-4c20-85f4-751779ecdeff","Type":"ContainerDied","Data":"72076b1b925cb2a24a057b1448a0eefe54bf004558ba4ff8bbe32115b2d237cf"} Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.668249 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72076b1b925cb2a24a057b1448a0eefe54bf004558ba4ff8bbe32115b2d237cf" Jan 30 00:22:19 crc kubenswrapper[5117]: I0130 00:22:19.668024 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.760769 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz"] Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761863 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="64980233-03c4-482d-bf2d-1bb9e9bc6614" containerName="oc" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761881 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="64980233-03c4-482d-bf2d-1bb9e9bc6614" containerName="oc" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761891 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="pull" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761900 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="pull" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761917 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="util" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761926 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="util" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761945 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="extract" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.761953 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="extract" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.762111 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="64980233-03c4-482d-bf2d-1bb9e9bc6614" containerName="oc" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.762127 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="df52d557-84e5-4c20-85f4-751779ecdeff" containerName="extract" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.771734 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.774458 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.775512 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-bdqnq\"" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.775887 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.778109 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz"] Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.854810 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lhz\" (UniqueName: \"kubernetes.io/projected/af950c5f-84ab-4e82-b5a4-89f4d4be27b8-kube-api-access-57lhz\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-zcrwz\" (UID: \"af950c5f-84ab-4e82-b5a4-89f4d4be27b8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.854864 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af950c5f-84ab-4e82-b5a4-89f4d4be27b8-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-zcrwz\" (UID: \"af950c5f-84ab-4e82-b5a4-89f4d4be27b8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.955556 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57lhz\" (UniqueName: \"kubernetes.io/projected/af950c5f-84ab-4e82-b5a4-89f4d4be27b8-kube-api-access-57lhz\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-zcrwz\" (UID: \"af950c5f-84ab-4e82-b5a4-89f4d4be27b8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.955890 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af950c5f-84ab-4e82-b5a4-89f4d4be27b8-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-zcrwz\" (UID: \"af950c5f-84ab-4e82-b5a4-89f4d4be27b8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.956406 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af950c5f-84ab-4e82-b5a4-89f4d4be27b8-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-zcrwz\" (UID: \"af950c5f-84ab-4e82-b5a4-89f4d4be27b8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:25 crc kubenswrapper[5117]: I0130 00:22:25.981787 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57lhz\" (UniqueName: \"kubernetes.io/projected/af950c5f-84ab-4e82-b5a4-89f4d4be27b8-kube-api-access-57lhz\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-zcrwz\" (UID: 
\"af950c5f-84ab-4e82-b5a4-89f4d4be27b8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:26 crc kubenswrapper[5117]: I0130 00:22:26.086872 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" Jan 30 00:22:26 crc kubenswrapper[5117]: I0130 00:22:26.324772 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz"] Jan 30 00:22:26 crc kubenswrapper[5117]: W0130 00:22:26.335904 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf950c5f_84ab_4e82_b5a4_89f4d4be27b8.slice/crio-880a231d7d245155d8cab22e8c7dfb45a64a81400307371e6d9e62c746d6c21e WatchSource:0}: Error finding container 880a231d7d245155d8cab22e8c7dfb45a64a81400307371e6d9e62c746d6c21e: Status 404 returned error can't find the container with id 880a231d7d245155d8cab22e8c7dfb45a64a81400307371e6d9e62c746d6c21e Jan 30 00:22:26 crc kubenswrapper[5117]: I0130 00:22:26.708956 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" event={"ID":"af950c5f-84ab-4e82-b5a4-89f4d4be27b8","Type":"ContainerStarted","Data":"880a231d7d245155d8cab22e8c7dfb45a64a81400307371e6d9e62c746d6c21e"} Jan 30 00:22:28 crc kubenswrapper[5117]: I0130 00:22:28.668583 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-xrds8" Jan 30 00:22:29 crc kubenswrapper[5117]: I0130 00:22:29.727362 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" event={"ID":"af950c5f-84ab-4e82-b5a4-89f4d4be27b8","Type":"ContainerStarted","Data":"2790a816cfcefd5d3217b998f65c561a4fd6cc2146dad0b8a20453a940a2befb"} Jan 30 00:22:29 crc kubenswrapper[5117]: I0130 00:22:29.759213 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-zcrwz" podStartSLOduration=1.6513004580000001 podStartE2EDuration="4.759194903s" podCreationTimestamp="2026-01-30 00:22:25 +0000 UTC" firstStartedPulling="2026-01-30 00:22:26.337719566 +0000 UTC m=+709.449255456" lastFinishedPulling="2026-01-30 00:22:29.445614011 +0000 UTC m=+712.557149901" observedRunningTime="2026-01-30 00:22:29.753734031 +0000 UTC m=+712.865269941" watchObservedRunningTime="2026-01-30 00:22:29.759194903 +0000 UTC m=+712.870730793" Jan 30 00:22:32 crc kubenswrapper[5117]: E0130 00:22:32.270636 5117 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" 
image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:22:32 crc kubenswrapper[5117]: E0130 00:22:32.270891 5117 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nlg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_openshift-marketplace(e0791d08-fb28-4fed-9fc1-f4a1c7d8c077): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:22:32 crc kubenswrapper[5117]: E0130 00:22:32.272205 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:22:34 crc 
kubenswrapper[5117]: I0130 00:22:34.467309 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-zp4f2"]
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.475919 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-zp4f2"]
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.476113 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.478580 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-298pw\""
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.478933 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.479628 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.584676 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhs8\" (UniqueName: \"kubernetes.io/projected/d9d4730d-04fa-4c8e-a240-3d3a540746d2-kube-api-access-7vhs8\") pod \"cert-manager-webhook-597b96b99b-zp4f2\" (UID: \"d9d4730d-04fa-4c8e-a240-3d3a540746d2\") " pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.584800 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d4730d-04fa-4c8e-a240-3d3a540746d2-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-zp4f2\" (UID: \"d9d4730d-04fa-4c8e-a240-3d3a540746d2\") " pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.686363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d4730d-04fa-4c8e-a240-3d3a540746d2-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-zp4f2\" (UID: \"d9d4730d-04fa-4c8e-a240-3d3a540746d2\") " pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.686456 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhs8\" (UniqueName: \"kubernetes.io/projected/d9d4730d-04fa-4c8e-a240-3d3a540746d2-kube-api-access-7vhs8\") pod \"cert-manager-webhook-597b96b99b-zp4f2\" (UID: \"d9d4730d-04fa-4c8e-a240-3d3a540746d2\") " pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.719883 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d4730d-04fa-4c8e-a240-3d3a540746d2-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-zp4f2\" (UID: \"d9d4730d-04fa-4c8e-a240-3d3a540746d2\") " pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.732405 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhs8\" (UniqueName: \"kubernetes.io/projected/d9d4730d-04fa-4c8e-a240-3d3a540746d2-kube-api-access-7vhs8\") pod \"cert-manager-webhook-597b96b99b-zp4f2\" (UID: \"d9d4730d-04fa-4c8e-a240-3d3a540746d2\") " pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:34 crc kubenswrapper[5117]: I0130 00:22:34.807775 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:35 crc kubenswrapper[5117]: I0130 00:22:35.081922 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-zp4f2"]
Jan 30 00:22:35 crc kubenswrapper[5117]: I0130 00:22:35.762827 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2" event={"ID":"d9d4730d-04fa-4c8e-a240-3d3a540746d2","Type":"ContainerStarted","Data":"d7ca37b023eff4266f1ac6c344722937c3572c2dd7a602d9cbb8825f50e0213d"}
Jan 30 00:22:36 crc kubenswrapper[5117]: I0130 00:22:36.978182 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"]
Jan 30 00:22:36 crc kubenswrapper[5117]: I0130 00:22:36.989224 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"]
Jan 30 00:22:36 crc kubenswrapper[5117]: I0130 00:22:36.989357 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:36 crc kubenswrapper[5117]: I0130 00:22:36.992222 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-bzcxl\""
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.026566 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ebad27-09c5-4752-bbae-2bd38d69f426-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-sfwsc\" (UID: \"80ebad27-09c5-4752-bbae-2bd38d69f426\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.026615 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhmb\" (UniqueName: \"kubernetes.io/projected/80ebad27-09c5-4752-bbae-2bd38d69f426-kube-api-access-9bhmb\") pod \"cert-manager-cainjector-8966b78d4-sfwsc\" (UID: \"80ebad27-09c5-4752-bbae-2bd38d69f426\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.127418 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ebad27-09c5-4752-bbae-2bd38d69f426-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-sfwsc\" (UID: \"80ebad27-09c5-4752-bbae-2bd38d69f426\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.127472 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhmb\" (UniqueName: \"kubernetes.io/projected/80ebad27-09c5-4752-bbae-2bd38d69f426-kube-api-access-9bhmb\") pod \"cert-manager-cainjector-8966b78d4-sfwsc\" (UID: \"80ebad27-09c5-4752-bbae-2bd38d69f426\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.147764 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhmb\" (UniqueName: \"kubernetes.io/projected/80ebad27-09c5-4752-bbae-2bd38d69f426-kube-api-access-9bhmb\") pod \"cert-manager-cainjector-8966b78d4-sfwsc\" (UID: \"80ebad27-09c5-4752-bbae-2bd38d69f426\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.154990 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ebad27-09c5-4752-bbae-2bd38d69f426-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-sfwsc\" (UID: \"80ebad27-09c5-4752-bbae-2bd38d69f426\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.309506 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.582879 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-sfwsc"]
Jan 30 00:22:37 crc kubenswrapper[5117]: I0130 00:22:37.776979 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc" event={"ID":"80ebad27-09c5-4752-bbae-2bd38d69f426","Type":"ContainerStarted","Data":"748f72174659d64ba54dbdc6d9efd0bf313c90c4bf6e886b038c261240200fe1"}
Jan 30 00:22:40 crc kubenswrapper[5117]: I0130 00:22:40.813153 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2" event={"ID":"d9d4730d-04fa-4c8e-a240-3d3a540746d2","Type":"ContainerStarted","Data":"183f10cea4c79074ee673934311652dc8d8f5b4753c38ab539c227ba79cc773a"}
Jan 30 00:22:40 crc kubenswrapper[5117]: I0130 00:22:40.814038 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:40 crc kubenswrapper[5117]: I0130 00:22:40.815804 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc" event={"ID":"80ebad27-09c5-4752-bbae-2bd38d69f426","Type":"ContainerStarted","Data":"4e7704262cf0c00392fc83ded4104ba3320f36b2d67dbf4f3fcff5f777bbe5ce"}
Jan 30 00:22:40 crc kubenswrapper[5117]: I0130 00:22:40.832122 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2" podStartSLOduration=2.168937181 podStartE2EDuration="6.832083542s" podCreationTimestamp="2026-01-30 00:22:34 +0000 UTC" firstStartedPulling="2026-01-30 00:22:35.087920259 +0000 UTC m=+718.199456149" lastFinishedPulling="2026-01-30 00:22:39.75106663 +0000 UTC m=+722.862602510" observedRunningTime="2026-01-30 00:22:40.829099298 +0000 UTC m=+723.940635188" watchObservedRunningTime="2026-01-30 00:22:40.832083542 +0000 UTC m=+723.943619492"
Jan 30 00:22:40 crc kubenswrapper[5117]: I0130 00:22:40.848180 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-sfwsc" podStartSLOduration=2.697844885 podStartE2EDuration="4.84814277s" podCreationTimestamp="2026-01-30 00:22:36 +0000 UTC" firstStartedPulling="2026-01-30 00:22:37.600214969 +0000 UTC m=+720.711750859" lastFinishedPulling="2026-01-30 00:22:39.750512814 +0000 UTC m=+722.862048744" observedRunningTime="2026-01-30 00:22:40.843433599 +0000 UTC m=+723.954969489" watchObservedRunningTime="2026-01-30 00:22:40.84814277 +0000 UTC m=+723.959678690"
Jan 30 00:22:45 crc kubenswrapper[5117]: E0130 00:22:45.039313 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:22:46 crc kubenswrapper[5117]: I0130 00:22:46.826363 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-zp4f2"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.406569 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-9r65c"]
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.448511 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-9r65c"]
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.448716 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.453359 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-rbbr5\""
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.498159 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbfccfb2-32b4-4263-aa31-c85941fc2e0a-bound-sa-token\") pod \"cert-manager-759f64656b-9r65c\" (UID: \"dbfccfb2-32b4-4263-aa31-c85941fc2e0a\") " pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.498314 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbwb\" (UniqueName: \"kubernetes.io/projected/dbfccfb2-32b4-4263-aa31-c85941fc2e0a-kube-api-access-2xbwb\") pod \"cert-manager-759f64656b-9r65c\" (UID: \"dbfccfb2-32b4-4263-aa31-c85941fc2e0a\") " pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.600159 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbwb\" (UniqueName: \"kubernetes.io/projected/dbfccfb2-32b4-4263-aa31-c85941fc2e0a-kube-api-access-2xbwb\") pod \"cert-manager-759f64656b-9r65c\" (UID: \"dbfccfb2-32b4-4263-aa31-c85941fc2e0a\") " pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.600359 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbfccfb2-32b4-4263-aa31-c85941fc2e0a-bound-sa-token\") pod \"cert-manager-759f64656b-9r65c\" (UID: \"dbfccfb2-32b4-4263-aa31-c85941fc2e0a\") " pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.621096 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbfccfb2-32b4-4263-aa31-c85941fc2e0a-bound-sa-token\") pod \"cert-manager-759f64656b-9r65c\" (UID: \"dbfccfb2-32b4-4263-aa31-c85941fc2e0a\") " pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.622001 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbwb\" (UniqueName: \"kubernetes.io/projected/dbfccfb2-32b4-4263-aa31-c85941fc2e0a-kube-api-access-2xbwb\") pod \"cert-manager-759f64656b-9r65c\" (UID: \"dbfccfb2-32b4-4263-aa31-c85941fc2e0a\") " pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.772635 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-9r65c"
Jan 30 00:22:53 crc kubenswrapper[5117]: W0130 00:22:53.991280 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbfccfb2_32b4_4263_aa31_c85941fc2e0a.slice/crio-1162ebe660b144a4ef7f1b38a1b587cfdf23683ad6eb7d0157ec9daa49179093 WatchSource:0}: Error finding container 1162ebe660b144a4ef7f1b38a1b587cfdf23683ad6eb7d0157ec9daa49179093: Status 404 returned error can't find the container with id 1162ebe660b144a4ef7f1b38a1b587cfdf23683ad6eb7d0157ec9daa49179093
Jan 30 00:22:53 crc kubenswrapper[5117]: I0130 00:22:53.991301 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-9r65c"]
Jan 30 00:22:54 crc kubenswrapper[5117]: I0130 00:22:54.925962 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-9r65c" event={"ID":"dbfccfb2-32b4-4263-aa31-c85941fc2e0a","Type":"ContainerStarted","Data":"3fc39eb31cf82e3e3b6086d545f265e553910a00f8aa8a2330125672ea9992ec"}
Jan 30 00:22:54 crc kubenswrapper[5117]: I0130 00:22:54.927447 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-9r65c" event={"ID":"dbfccfb2-32b4-4263-aa31-c85941fc2e0a","Type":"ContainerStarted","Data":"1162ebe660b144a4ef7f1b38a1b587cfdf23683ad6eb7d0157ec9daa49179093"}
Jan 30 00:22:54 crc kubenswrapper[5117]: I0130 00:22:54.947683 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-9r65c" podStartSLOduration=1.947656483 podStartE2EDuration="1.947656483s" podCreationTimestamp="2026-01-30 00:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:22:54.9414594 +0000 UTC m=+738.052995340" watchObservedRunningTime="2026-01-30 00:22:54.947656483 +0000 UTC m=+738.059192383"
Jan 30 00:22:59 crc kubenswrapper[5117]: E0130 00:22:59.044834 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.419636 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m4ht4"]
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.432463 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.435203 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4ht4"]
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.554265 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvr9\" (UniqueName: \"kubernetes.io/projected/34af4482-ac2d-4298-a01e-c00f592f6d64-kube-api-access-dhvr9\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.554315 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-catalog-content\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.554352 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-utilities\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.655652 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhvr9\" (UniqueName: \"kubernetes.io/projected/34af4482-ac2d-4298-a01e-c00f592f6d64-kube-api-access-dhvr9\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.655721 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-catalog-content\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.655754 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-utilities\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.656185 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-catalog-content\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.656249 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-utilities\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.681152 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhvr9\" (UniqueName: \"kubernetes.io/projected/34af4482-ac2d-4298-a01e-c00f592f6d64-kube-api-access-dhvr9\") pod \"community-operators-m4ht4\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") " pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:03 crc kubenswrapper[5117]: I0130 00:23:03.755155 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:04 crc kubenswrapper[5117]: I0130 00:23:04.022598 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4ht4"]
Jan 30 00:23:04 crc kubenswrapper[5117]: I0130 00:23:04.555179 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:23:04 crc kubenswrapper[5117]: I0130 00:23:04.556538 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:23:05 crc kubenswrapper[5117]: I0130 00:23:05.016907 5117 generic.go:358] "Generic (PLEG): container finished" podID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerID="df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da" exitCode=0
Jan 30 00:23:05 crc kubenswrapper[5117]: I0130 00:23:05.016988 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerDied","Data":"df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da"}
Jan 30 00:23:05 crc kubenswrapper[5117]: I0130 00:23:05.017032 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerStarted","Data":"0d49e71dc53aeb305cb82518fe6aaea1c15cdbe28b68e17e20dbb593b571051b"}
Jan 30 00:23:06 crc kubenswrapper[5117]: I0130 00:23:06.037258 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerStarted","Data":"a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574"}
Jan 30 00:23:07 crc kubenswrapper[5117]: I0130 00:23:07.049578 5117 generic.go:358] "Generic (PLEG): container finished" podID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerID="a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574" exitCode=0
Jan 30 00:23:07 crc kubenswrapper[5117]: I0130 00:23:07.049657 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerDied","Data":"a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574"}
Jan 30 00:23:08 crc kubenswrapper[5117]: I0130 00:23:08.064264 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerStarted","Data":"96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff"}
Jan 30 00:23:08 crc kubenswrapper[5117]: I0130 00:23:08.093414 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m4ht4" podStartSLOduration=4.320680822 podStartE2EDuration="5.093388749s" podCreationTimestamp="2026-01-30 00:23:03 +0000 UTC" firstStartedPulling="2026-01-30 00:23:05.018076665 +0000 UTC m=+748.129612565" lastFinishedPulling="2026-01-30 00:23:05.790784572 +0000 UTC m=+748.902320492" observedRunningTime="2026-01-30 00:23:08.089309625 +0000 UTC m=+751.200845555" watchObservedRunningTime="2026-01-30 00:23:08.093388749 +0000 UTC m=+751.204924669"
Jan 30 00:23:13 crc kubenswrapper[5117]: I0130 00:23:13.755831 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:13 crc kubenswrapper[5117]: I0130 00:23:13.756553 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:13 crc kubenswrapper[5117]: I0130 00:23:13.826603 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:14 crc kubenswrapper[5117]: I0130 00:23:14.167598 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:14 crc kubenswrapper[5117]: I0130 00:23:14.232389 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m4ht4"]
Jan 30 00:23:14 crc kubenswrapper[5117]: E0130 00:23:14.284269 5117 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb"
Jan 30 00:23:14 crc kubenswrapper[5117]: E0130 00:23:14.284453 5117 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nlg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_openshift-marketplace(e0791d08-fb28-4fed-9fc1-f4a1c7d8c077): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError"
Jan 30 00:23:14 crc kubenswrapper[5117]: E0130 00:23:14.285843 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.126775 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m4ht4" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="registry-server" containerID="cri-o://96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff" gracePeriod=2
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.479846 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vb2gl"]
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.499942 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.524936 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vb2gl"]
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.542704 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.569713 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-utilities\") pod \"34af4482-ac2d-4298-a01e-c00f592f6d64\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") "
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.570081 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhvr9\" (UniqueName: \"kubernetes.io/projected/34af4482-ac2d-4298-a01e-c00f592f6d64-kube-api-access-dhvr9\") pod \"34af4482-ac2d-4298-a01e-c00f592f6d64\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") "
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.570220 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-catalog-content\") pod \"34af4482-ac2d-4298-a01e-c00f592f6d64\" (UID: \"34af4482-ac2d-4298-a01e-c00f592f6d64\") "
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.570434 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f646r\" (UniqueName: \"kubernetes.io/projected/aac8ac91-5044-4648-92fd-ec0396c783b9-kube-api-access-f646r\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.570487 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-catalog-content\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.570744 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-utilities\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.571591 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-utilities" (OuterVolumeSpecName: "utilities") pod "34af4482-ac2d-4298-a01e-c00f592f6d64" (UID: "34af4482-ac2d-4298-a01e-c00f592f6d64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.582611 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34af4482-ac2d-4298-a01e-c00f592f6d64-kube-api-access-dhvr9" (OuterVolumeSpecName: "kube-api-access-dhvr9") pod "34af4482-ac2d-4298-a01e-c00f592f6d64" (UID: "34af4482-ac2d-4298-a01e-c00f592f6d64"). InnerVolumeSpecName "kube-api-access-dhvr9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.640146 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34af4482-ac2d-4298-a01e-c00f592f6d64" (UID: "34af4482-ac2d-4298-a01e-c00f592f6d64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.673167 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f646r\" (UniqueName: \"kubernetes.io/projected/aac8ac91-5044-4648-92fd-ec0396c783b9-kube-api-access-f646r\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.673709 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-catalog-content\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.673896 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-utilities\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.674644 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhvr9\" (UniqueName: \"kubernetes.io/projected/34af4482-ac2d-4298-a01e-c00f592f6d64-kube-api-access-dhvr9\") on node \"crc\" DevicePath \"\""
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.674802 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.674913 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af4482-ac2d-4298-a01e-c00f592f6d64-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.674563 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-catalog-content\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.674567 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-utilities\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.696078 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f646r\" (UniqueName: \"kubernetes.io/projected/aac8ac91-5044-4648-92fd-ec0396c783b9-kube-api-access-f646r\") pod \"certified-operators-vb2gl\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") " pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:16 crc kubenswrapper[5117]: I0130 00:23:16.867659 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.130577 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vb2gl"]
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.134571 5117 generic.go:358] "Generic (PLEG): container finished" podID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerID="96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff" exitCode=0
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.134658 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerDied","Data":"96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff"}
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.135148 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4ht4" event={"ID":"34af4482-ac2d-4298-a01e-c00f592f6d64","Type":"ContainerDied","Data":"0d49e71dc53aeb305cb82518fe6aaea1c15cdbe28b68e17e20dbb593b571051b"}
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.135179 5117 scope.go:117] "RemoveContainer" containerID="96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.134761 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m4ht4"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.159630 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m4ht4"]
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.163157 5117 scope.go:117] "RemoveContainer" containerID="a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.166614 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m4ht4"]
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.182443 5117 scope.go:117] "RemoveContainer" containerID="df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.198518 5117 scope.go:117] "RemoveContainer" containerID="96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff"
Jan 30 00:23:17 crc kubenswrapper[5117]: E0130 00:23:17.198931 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff\": container with ID starting with 96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff not found: ID does not exist" containerID="96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.198981 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff"} err="failed to get container status \"96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff\": rpc error: code = NotFound desc = could not find container \"96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff\": container with ID starting with 96d697399932bf14488a5f8eaaf7b36609f66ac9329b7eb24a0c9e4240d976ff not found: ID does not exist"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.199012 5117 scope.go:117] "RemoveContainer" containerID="a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574"
Jan 30 00:23:17 crc kubenswrapper[5117]: E0130 00:23:17.199419 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574\": container with ID starting with a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574 not found: ID does not exist" containerID="a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.199478 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574"} err="failed to get container status \"a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574\": rpc error: code = NotFound desc = could not find container \"a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574\": container with ID starting with a6274111495c5172e9a0a15cc9c9a932bd4d128fbc4db15a15854938893eb574 not found: ID does not exist"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.199527 5117 scope.go:117] "RemoveContainer" containerID="df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da"
Jan 30 00:23:17 crc kubenswrapper[5117]: E0130 00:23:17.200004 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da\": container with ID starting with df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da not found: ID does not exist" containerID="df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da"
Jan 30 00:23:17 crc kubenswrapper[5117]: I0130 00:23:17.200054 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da"} err="failed to get container status \"df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da\": rpc error: code = NotFound desc = could not find container \"df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da\": container with ID starting with df6b9a6b9c90037b90f6d3c0e08fa64239f43af0a1608c051ae05c98fcb071da not found: ID does not exist"
Jan 30 00:23:18 crc kubenswrapper[5117]: I0130 00:23:18.148833 5117 generic.go:358] "Generic (PLEG): container finished" podID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerID="7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26" exitCode=0
Jan 30 00:23:18 crc kubenswrapper[5117]: I0130 00:23:18.148910 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vb2gl" event={"ID":"aac8ac91-5044-4648-92fd-ec0396c783b9","Type":"ContainerDied","Data":"7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26"}
Jan 30 00:23:18 crc kubenswrapper[5117]: I0130 00:23:18.148995 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vb2gl" event={"ID":"aac8ac91-5044-4648-92fd-ec0396c783b9","Type":"ContainerStarted","Data":"4f20a41e625f9d33dfc1e0ec0905c8800321bcdcbdc8e9682531a0a0de88f670"}
Jan 30 00:23:19 crc kubenswrapper[5117]: I0130 00:23:19.050606 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" path="/var/lib/kubelet/pods/34af4482-ac2d-4298-a01e-c00f592f6d64/volumes"
Jan 30 00:23:20 crc kubenswrapper[5117]: I0130 00:23:20.172042 5117 generic.go:358] "Generic (PLEG): container finished" podID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerID="07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95" exitCode=0
Jan 30 00:23:20 crc kubenswrapper[5117]: I0130 00:23:20.172124 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vb2gl" event={"ID":"aac8ac91-5044-4648-92fd-ec0396c783b9","Type":"ContainerDied","Data":"07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95"}
Jan 30 00:23:21 crc kubenswrapper[5117]: I0130 00:23:21.184169 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vb2gl" event={"ID":"aac8ac91-5044-4648-92fd-ec0396c783b9","Type":"ContainerStarted","Data":"99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5"}
Jan 30 00:23:21 crc kubenswrapper[5117]: I0130 00:23:21.209384 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vb2gl" podStartSLOduration=4.125736637 podStartE2EDuration="5.209366342s" podCreationTimestamp="2026-01-30 00:23:16 +0000 UTC" firstStartedPulling="2026-01-30 00:23:18.150686012 +0000 UTC m=+761.262221942" lastFinishedPulling="2026-01-30 00:23:19.234315717 +0000 UTC m=+762.345851647" observedRunningTime="2026-01-30 00:23:21.208056376 +0000 UTC m=+764.319592306" watchObservedRunningTime="2026-01-30 00:23:21.209366342 +0000 UTC m=+764.320902242"
Jan 30 00:23:26 crc kubenswrapper[5117]: I0130 00:23:26.867820 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:26 crc kubenswrapper[5117]: I0130 00:23:26.868402 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:26 crc kubenswrapper[5117]: I0130 00:23:26.932894 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:27 crc kubenswrapper[5117]: I0130 00:23:27.280778 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:27 crc kubenswrapper[5117]: I0130 00:23:27.327231 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vb2gl"]
Jan 30 00:23:28 crc kubenswrapper[5117]: E0130 00:23:28.040553 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.236762 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vb2gl" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="registry-server" containerID="cri-o://99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5" gracePeriod=2
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.681597 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.742979 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f646r\" (UniqueName: \"kubernetes.io/projected/aac8ac91-5044-4648-92fd-ec0396c783b9-kube-api-access-f646r\") pod \"aac8ac91-5044-4648-92fd-ec0396c783b9\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") "
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.743151 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-catalog-content\") pod \"aac8ac91-5044-4648-92fd-ec0396c783b9\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") "
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.743229 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-utilities\") pod \"aac8ac91-5044-4648-92fd-ec0396c783b9\" (UID: \"aac8ac91-5044-4648-92fd-ec0396c783b9\") "
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.744399 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-utilities" (OuterVolumeSpecName: "utilities") pod "aac8ac91-5044-4648-92fd-ec0396c783b9" (UID: "aac8ac91-5044-4648-92fd-ec0396c783b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.752573 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac8ac91-5044-4648-92fd-ec0396c783b9-kube-api-access-f646r" (OuterVolumeSpecName: "kube-api-access-f646r") pod "aac8ac91-5044-4648-92fd-ec0396c783b9" (UID: "aac8ac91-5044-4648-92fd-ec0396c783b9"). InnerVolumeSpecName "kube-api-access-f646r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.846133 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f646r\" (UniqueName: \"kubernetes.io/projected/aac8ac91-5044-4648-92fd-ec0396c783b9-kube-api-access-f646r\") on node \"crc\" DevicePath \"\""
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.846216 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.873827 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aac8ac91-5044-4648-92fd-ec0396c783b9" (UID: "aac8ac91-5044-4648-92fd-ec0396c783b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 30 00:23:29 crc kubenswrapper[5117]: I0130 00:23:29.947675 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac8ac91-5044-4648-92fd-ec0396c783b9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.248896 5117 generic.go:358] "Generic (PLEG): container finished" podID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerID="99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5" exitCode=0
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.249059 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vb2gl"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.249077 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vb2gl" event={"ID":"aac8ac91-5044-4648-92fd-ec0396c783b9","Type":"ContainerDied","Data":"99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5"}
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.249133 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vb2gl" event={"ID":"aac8ac91-5044-4648-92fd-ec0396c783b9","Type":"ContainerDied","Data":"4f20a41e625f9d33dfc1e0ec0905c8800321bcdcbdc8e9682531a0a0de88f670"}
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.249173 5117 scope.go:117] "RemoveContainer" containerID="99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.278336 5117 scope.go:117] "RemoveContainer" containerID="07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.307200 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vb2gl"]
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.314904 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vb2gl"]
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.329543 5117 scope.go:117] "RemoveContainer" containerID="7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.363219 5117 scope.go:117] "RemoveContainer" containerID="99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5"
Jan 30 00:23:30 crc kubenswrapper[5117]: E0130 00:23:30.363996 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5\": container with ID starting with 99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5 not found: ID does not exist" containerID="99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.364090 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5"} err="failed to get container status \"99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5\": rpc error: code = NotFound desc = could not find container \"99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5\": container with ID starting with 99c4b2d2de0ed5804094ceca3cbadf0f7dc97a17a0630d7713d1908bdf8f16b5 not found: ID does not exist"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.364143 5117 scope.go:117] "RemoveContainer" containerID="07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95"
Jan 30 00:23:30 crc kubenswrapper[5117]: E0130 00:23:30.364889 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95\": container with ID starting with 07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95 not found: ID does not exist" containerID="07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.364954 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95"} err="failed to get container status \"07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95\": rpc error: code = NotFound desc = could not find container \"07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95\": container with ID starting with 07667343b1731543a183e54443dbdc1db54e82a7e997109cd4cc912385567e95 not found: ID does not exist"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.364994 5117 scope.go:117] "RemoveContainer" containerID="7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26"
Jan 30 00:23:30 crc kubenswrapper[5117]: E0130 00:23:30.365666 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26\": container with ID starting with 7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26 not found: ID does not exist" containerID="7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26"
Jan 30 00:23:30 crc kubenswrapper[5117]: I0130 00:23:30.365737 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26"} err="failed to get container status \"7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26\": rpc error: code = NotFound desc = could not find container \"7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26\": container with ID starting with 7a7d76d8cbad7847058f2eef330142b9a7a551d9f23db2bd5003e8af14157d26 not found: ID does not exist"
Jan 30 00:23:31 crc kubenswrapper[5117]: I0130 00:23:31.048673 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" path="/var/lib/kubelet/pods/aac8ac91-5044-4648-92fd-ec0396c783b9/volumes"
Jan 30 00:23:34 crc kubenswrapper[5117]: I0130 00:23:34.555109 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 00:23:34 crc kubenswrapper[5117]: I0130 00:23:34.555521 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 00:23:39 crc kubenswrapper[5117]: E0130 00:23:39.047264 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:23:51 crc kubenswrapper[5117]: E0130 00:23:51.039050 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.145680 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kjdfn"]
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150426 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="extract-utilities"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150461 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="extract-utilities"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150479 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="registry-server"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150487 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="registry-server"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150499 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="extract-content"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150505 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="extract-content"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150518 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="extract-content"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150525 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="extract-content"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150543 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="registry-server"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150550 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="registry-server"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150567 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="extract-utilities"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150576 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="extract-utilities"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150734 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="aac8ac91-5044-4648-92fd-ec0396c783b9" containerName="registry-server"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.150760 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="34af4482-ac2d-4298-a01e-c00f592f6d64" containerName="registry-server"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.159340 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kjdfn"]
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.159507 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kjdfn"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.162495 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.163231 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-f9hbv\""
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.167579 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.232465 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2xz\" (UniqueName: \"kubernetes.io/projected/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b-kube-api-access-7b2xz\") pod \"auto-csr-approver-29495544-kjdfn\" (UID: \"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b\") " pod="openshift-infra/auto-csr-approver-29495544-kjdfn"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.333837 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2xz\" (UniqueName: \"kubernetes.io/projected/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b-kube-api-access-7b2xz\") pod \"auto-csr-approver-29495544-kjdfn\" (UID: \"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b\") " pod="openshift-infra/auto-csr-approver-29495544-kjdfn"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.360580 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2xz\" (UniqueName: \"kubernetes.io/projected/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b-kube-api-access-7b2xz\") pod \"auto-csr-approver-29495544-kjdfn\" (UID: \"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b\") " pod="openshift-infra/auto-csr-approver-29495544-kjdfn"
Jan 30 00:24:00 crc kubenswrapper[5117]: I0130 00:24:00.491601 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kjdfn" Jan 30 00:24:01 crc kubenswrapper[5117]: I0130 00:24:01.379891 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kjdfn"] Jan 30 00:24:01 crc kubenswrapper[5117]: W0130 00:24:01.379926 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f1d026f_8ed9_4f1d_be42_e27ea53a5f2b.slice/crio-b96cdc4160dee50961024c879e4095a911e753d2f3ed19fcabf0475e046afaad WatchSource:0}: Error finding container b96cdc4160dee50961024c879e4095a911e753d2f3ed19fcabf0475e046afaad: Status 404 returned error can't find the container with id b96cdc4160dee50961024c879e4095a911e753d2f3ed19fcabf0475e046afaad Jan 30 00:24:01 crc kubenswrapper[5117]: I0130 00:24:01.479488 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kjdfn" event={"ID":"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b","Type":"ContainerStarted","Data":"b96cdc4160dee50961024c879e4095a911e753d2f3ed19fcabf0475e046afaad"} Jan 30 00:24:03 crc kubenswrapper[5117]: E0130 00:24:03.039289 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:24:03 crc kubenswrapper[5117]: I0130 00:24:03.498535 5117 generic.go:358] "Generic (PLEG): container finished" podID="0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b" containerID="62a97bce7f26f22bf6ebb5c58383db6a7d4c952e0449c5e3c3032c95d8c9625f" exitCode=0 Jan 30 00:24:03 crc kubenswrapper[5117]: I0130 00:24:03.498623 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kjdfn" event={"ID":"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b","Type":"ContainerDied","Data":"62a97bce7f26f22bf6ebb5c58383db6a7d4c952e0449c5e3c3032c95d8c9625f"} Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.555811 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.559085 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.559345 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.560470 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"073defdd7077d53303dbf34291dabf3d999fa2598157f63385c19d2858c64243"} pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.560863 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" containerID="cri-o://073defdd7077d53303dbf34291dabf3d999fa2598157f63385c19d2858c64243" gracePeriod=600 Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.835950 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kjdfn" Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.896743 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b2xz\" (UniqueName: \"kubernetes.io/projected/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b-kube-api-access-7b2xz\") pod \"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b\" (UID: \"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b\") " Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.906886 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b-kube-api-access-7b2xz" (OuterVolumeSpecName: "kube-api-access-7b2xz") pod "0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b" (UID: "0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b"). InnerVolumeSpecName "kube-api-access-7b2xz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:04 crc kubenswrapper[5117]: I0130 00:24:04.998981 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7b2xz\" (UniqueName: \"kubernetes.io/projected/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b-kube-api-access-7b2xz\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.515857 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kjdfn" Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.515872 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kjdfn" event={"ID":"0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b","Type":"ContainerDied","Data":"b96cdc4160dee50961024c879e4095a911e753d2f3ed19fcabf0475e046afaad"} Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.516404 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b96cdc4160dee50961024c879e4095a911e753d2f3ed19fcabf0475e046afaad" Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.519339 5117 generic.go:358] "Generic (PLEG): container finished" podID="3965caad-c581-45b3-88e0-99b4039659c5" containerID="073defdd7077d53303dbf34291dabf3d999fa2598157f63385c19d2858c64243" exitCode=0 Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.519397 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerDied","Data":"073defdd7077d53303dbf34291dabf3d999fa2598157f63385c19d2858c64243"} Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.519431 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"54d3a6365c99493f08f59c805da853cdb6dce1209ccd8d5d1aa4a59d4a29f37d"} Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.519449 5117 scope.go:117] "RemoveContainer" containerID="8fa20a680f842b91be2f212674ae09218d15dca3e62b236ca705f6ad0d0dc01e" Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.913554 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-6k29d"] Jan 30 00:24:05 crc kubenswrapper[5117]: I0130 00:24:05.920847 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495538-6k29d"] Jan 30 00:24:07 crc kubenswrapper[5117]: I0130 00:24:07.046839 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0ab2d0-bc28-4ced-a7a8-1bd939549e46" path="/var/lib/kubelet/pods/cd0ab2d0-bc28-4ced-a7a8-1bd939549e46/volumes" Jan 30 00:24:15 crc kubenswrapper[5117]: E0130 00:24:15.040886 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:24:29 crc kubenswrapper[5117]: E0130 00:24:29.044303 5117 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:24:39 crc kubenswrapper[5117]: I0130 00:24:39.737250 5117 scope.go:117] "RemoveContainer" containerID="b13426a65e6aaa4e64851c834bb5a6bd91e87c56207f5535845d207bdadde86a" Jan 30 00:24:42 crc kubenswrapper[5117]: E0130 00:24:42.111002 5117 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:24:42 crc kubenswrapper[5117]: E0130 00:24:42.111797 5117 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nlg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_openshift-marketplace(e0791d08-fb28-4fed-9fc1-f4a1c7d8c077): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:24:42 crc kubenswrapper[5117]: E0130 00:24:42.112990 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:24:54 crc kubenswrapper[5117]: E0130 00:24:54.040652 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.390247 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7wlw4/must-gather-jksxc"] Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.391459 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b" containerName="oc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.391480 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b" containerName="oc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.391672 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b" containerName="oc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.399034 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.401310 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-7wlw4\"/\"default-dockercfg-pq4dk\"" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.404063 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-7wlw4\"/\"kube-root-ca.crt\"" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.404068 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-7wlw4\"/\"openshift-service-ca.crt\"" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.418098 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7wlw4/must-gather-jksxc"] Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.475033 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5rnb\" (UniqueName: \"kubernetes.io/projected/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-kube-api-access-p5rnb\") pod \"must-gather-jksxc\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.475139 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-must-gather-output\") pod \"must-gather-jksxc\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.576968 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rnb\" (UniqueName: \"kubernetes.io/projected/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-kube-api-access-p5rnb\") pod \"must-gather-jksxc\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.577063 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-must-gather-output\") pod \"must-gather-jksxc\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.577545 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-must-gather-output\") pod \"must-gather-jksxc\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.600461 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5rnb\" (UniqueName: \"kubernetes.io/projected/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-kube-api-access-p5rnb\") pod \"must-gather-jksxc\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.722590 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:25:03 crc kubenswrapper[5117]: I0130 00:25:03.982716 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7wlw4/must-gather-jksxc"] Jan 30 00:25:04 crc kubenswrapper[5117]: I0130 00:25:04.965541 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7wlw4/must-gather-jksxc" event={"ID":"7eb1ec6a-3800-4471-97ff-ea77433e7cd3","Type":"ContainerStarted","Data":"678ff1a125da085d99d3522354603822f6226e54d2b52b8a612a8f452a12ebe3"} Jan 30 00:25:05 crc kubenswrapper[5117]: E0130 00:25:05.048815 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:25:11 crc kubenswrapper[5117]: I0130 00:25:11.028325 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7wlw4/must-gather-jksxc" event={"ID":"7eb1ec6a-3800-4471-97ff-ea77433e7cd3","Type":"ContainerStarted","Data":"f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f"} Jan 30 00:25:11 crc kubenswrapper[5117]: I0130 00:25:11.029008 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7wlw4/must-gather-jksxc" event={"ID":"7eb1ec6a-3800-4471-97ff-ea77433e7cd3","Type":"ContainerStarted","Data":"3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8"} Jan 30 00:25:11 crc 
kubenswrapper[5117]: I0130 00:25:11.049711 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7wlw4/must-gather-jksxc" podStartSLOduration=2.113704176 podStartE2EDuration="8.049679772s" podCreationTimestamp="2026-01-30 00:25:03 +0000 UTC" firstStartedPulling="2026-01-30 00:25:03.993140311 +0000 UTC m=+867.104676201" lastFinishedPulling="2026-01-30 00:25:09.929115907 +0000 UTC m=+873.040651797" observedRunningTime="2026-01-30 00:25:11.046957466 +0000 UTC m=+874.158493366" watchObservedRunningTime="2026-01-30 00:25:11.049679772 +0000 UTC m=+874.161215662" Jan 30 00:25:21 crc kubenswrapper[5117]: E0130 00:25:21.053256 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:25:32 crc kubenswrapper[5117]: E0130 00:25:32.038832 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:25:39 crc kubenswrapper[5117]: I0130 00:25:39.337883 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:25:39 crc kubenswrapper[5117]: I0130 00:25:39.339988 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-84dbb4d7c9-7g59k_6e098783-f06a-467c-817d-27e420e206b0/controller-manager/1.log" Jan 30 00:25:39 crc kubenswrapper[5117]: I0130 00:25:39.368022 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-sdjgw_c0ccdffb-2e23-428a-8423-b08f9d708b15/kube-multus/0.log" Jan 30 00:25:39 crc kubenswrapper[5117]: I0130 00:25:39.368221 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sdjgw_c0ccdffb-2e23-428a-8423-b08f9d708b15/kube-multus/0.log" Jan 30 00:25:39 crc kubenswrapper[5117]: I0130 00:25:39.373515 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:25:39 crc kubenswrapper[5117]: I0130 00:25:39.373713 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:25:47 crc kubenswrapper[5117]: E0130 00:25:47.042354 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:25:52 crc kubenswrapper[5117]: I0130 00:25:52.272742 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-zzzhq_fc1146e5-d235-43a2-af92-33464c191179/control-plane-machine-set-operator/0.log" Jan 30 00:25:52 crc kubenswrapper[5117]: I0130 00:25:52.409328 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-b52fx_682ed001-72d5-49dd-80bc-a8bb65323efd/kube-rbac-proxy/0.log" Jan 30 00:25:52 crc kubenswrapper[5117]: I0130 00:25:52.468907 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-b52fx_682ed001-72d5-49dd-80bc-a8bb65323efd/machine-api-operator/0.log" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.134520 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495546-7qj2x"] Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.144153 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-7qj2x"] Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.144290 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.146805 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.146937 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.147144 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-f9hbv\"" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.268814 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrt4\" (UniqueName: \"kubernetes.io/projected/e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f-kube-api-access-knrt4\") pod \"auto-csr-approver-29495546-7qj2x\" (UID: \"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f\") " pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.369603 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knrt4\" (UniqueName: \"kubernetes.io/projected/e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f-kube-api-access-knrt4\") pod \"auto-csr-approver-29495546-7qj2x\" (UID: \"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f\") " pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.386785 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrt4\" (UniqueName: \"kubernetes.io/projected/e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f-kube-api-access-knrt4\") pod \"auto-csr-approver-29495546-7qj2x\" (UID: \"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f\") " pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.470253 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:00 crc kubenswrapper[5117]: I0130 00:26:00.867808 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-7qj2x"] Jan 30 00:26:01 crc kubenswrapper[5117]: E0130 00:26:01.039576 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:26:01 crc kubenswrapper[5117]: I0130 00:26:01.358037 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" event={"ID":"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f","Type":"ContainerStarted","Data":"fd548d368eeecb70d3afec22497f8a1a414390a103250c839847a2ac1c547cfb"} Jan 30 00:26:03 crc kubenswrapper[5117]: I0130 00:26:03.370212 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" event={"ID":"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f","Type":"ContainerStarted","Data":"0acc09c6f4ecae304e867b3c0d3df385acd19e206e73b50e69fa6178d8be2def"} Jan 30 00:26:03 crc kubenswrapper[5117]: I0130 00:26:03.396219 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" podStartSLOduration=1.287989923 podStartE2EDuration="3.396192518s" podCreationTimestamp="2026-01-30 00:26:00 +0000 UTC" firstStartedPulling="2026-01-30 00:26:00.881243125 +0000 UTC m=+923.992779005" lastFinishedPulling="2026-01-30 00:26:02.98944571 +0000 UTC m=+926.100981600" observedRunningTime="2026-01-30 00:26:03.389080029 +0000 UTC m=+926.500615919" watchObservedRunningTime="2026-01-30 00:26:03.396192518 +0000 UTC m=+926.507728438" Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.376133 5117 generic.go:358] "Generic (PLEG): container finished" podID="e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f" containerID="0acc09c6f4ecae304e867b3c0d3df385acd19e206e73b50e69fa6178d8be2def" exitCode=0 Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.376298 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" event={"ID":"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f","Type":"ContainerDied","Data":"0acc09c6f4ecae304e867b3c0d3df385acd19e206e73b50e69fa6178d8be2def"} Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.476796 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-9r65c_dbfccfb2-32b4-4263-aa31-c85941fc2e0a/cert-manager-controller/0.log" Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.555237 5117 
patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.555531 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.638235 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-sfwsc_80ebad27-09c5-4752-bbae-2bd38d69f426/cert-manager-cainjector/0.log" Jan 30 00:26:04 crc kubenswrapper[5117]: I0130 00:26:04.704569 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-zp4f2_d9d4730d-04fa-4c8e-a240-3d3a540746d2/cert-manager-webhook/0.log" Jan 30 00:26:05 crc kubenswrapper[5117]: I0130 00:26:05.615540 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:05 crc kubenswrapper[5117]: I0130 00:26:05.685540 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knrt4\" (UniqueName: \"kubernetes.io/projected/e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f-kube-api-access-knrt4\") pod \"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f\" (UID: \"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f\") " Jan 30 00:26:05 crc kubenswrapper[5117]: I0130 00:26:05.691489 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f-kube-api-access-knrt4" (OuterVolumeSpecName: "kube-api-access-knrt4") pod "e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f" (UID: "e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f"). InnerVolumeSpecName "kube-api-access-knrt4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:26:05 crc kubenswrapper[5117]: I0130 00:26:05.787382 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knrt4\" (UniqueName: \"kubernetes.io/projected/e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f-kube-api-access-knrt4\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:06 crc kubenswrapper[5117]: I0130 00:26:06.389298 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" Jan 30 00:26:06 crc kubenswrapper[5117]: I0130 00:26:06.389319 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-7qj2x" event={"ID":"e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f","Type":"ContainerDied","Data":"fd548d368eeecb70d3afec22497f8a1a414390a103250c839847a2ac1c547cfb"} Jan 30 00:26:06 crc kubenswrapper[5117]: I0130 00:26:06.389360 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd548d368eeecb70d3afec22497f8a1a414390a103250c839847a2ac1c547cfb" Jan 30 00:26:06 crc kubenswrapper[5117]: I0130 00:26:06.666415 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-vskp9"] Jan 30 00:26:06 crc kubenswrapper[5117]: I0130 00:26:06.672370 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-vskp9"] Jan 30 00:26:07 crc kubenswrapper[5117]: I0130 00:26:07.044518 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c4ecc8-1969-413f-bcd9-07ba11e53d0c" path="/var/lib/kubelet/pods/93c4ecc8-1969-413f-bcd9-07ba11e53d0c/volumes" Jan 30 00:26:13 crc kubenswrapper[5117]: E0130 00:26:13.040108 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:26:19 crc kubenswrapper[5117]: I0130 00:26:19.031396 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-nsdnx_7858922f-a122-4c3c-8e82-2941f771c502/prometheus-operator/0.log" Jan 30 00:26:19 crc kubenswrapper[5117]: I0130 00:26:19.167915 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4_2dad4104-a9d8-45e9-9eae-39a841d6bd14/prometheus-operator-admission-webhook/0.log" Jan 30 00:26:19 crc kubenswrapper[5117]: I0130 00:26:19.213986 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n_16e72675-b6b0-409c-a161-7d1add8eba30/prometheus-operator-admission-webhook/0.log" Jan 30 00:26:19 crc kubenswrapper[5117]: I0130 00:26:19.494046 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-g88kv_2d031867-84f8-4c5b-824a-3be88a288652/operator/0.log" Jan 30 00:26:19 crc kubenswrapper[5117]: I0130 00:26:19.510070 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-xrds8_e3aa9511-b055-404b-b641-6b26327a7ac4/perses-operator/0.log" Jan 30 00:26:24 crc kubenswrapper[5117]: E0130 00:26:24.044583 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:26:32 crc kubenswrapper[5117]: I0130 00:26:32.858810 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/util/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.019997 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/util/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.042826 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/pull/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.054100 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/pull/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.221951 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/extract/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.223682 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/pull/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.242410 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftpl8l_08d3015b-53e3-4714-a88f-ce216cdbf7db/util/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.385423 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_e0791d08-fb28-4fed-9fc1-f4a1c7d8c077/util/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.590231 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_e0791d08-fb28-4fed-9fc1-f4a1c7d8c077/util/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.799550 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_e0791d08-fb28-4fed-9fc1-f4a1c7d8c077/util/0.log" Jan 30 00:26:33 crc kubenswrapper[5117]: I0130 00:26:33.950521 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/util/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.132816 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/util/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.153977 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/pull/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.155662 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/pull/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.288978 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/util/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.323610 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/extract/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.349655 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p5n76_df52d557-84e5-4c20-85f4-751779ecdeff/pull/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.468251 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/util/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.554883 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.554962 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.661223 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/util/0.log" Jan 30 00:26:34 crc 
kubenswrapper[5117]: I0130 00:26:34.669536 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/pull/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.679856 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/pull/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.869239 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/util/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.875734 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/pull/0.log" Jan 30 00:26:34 crc kubenswrapper[5117]: I0130 00:26:34.882845 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0894vkr_a0ee5f51-4db4-4713-bd3a-850996fcb555/extract/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.066296 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/extract-utilities/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.209729 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/extract-content/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.231376 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/extract-utilities/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.282768 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/extract-content/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.444645 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/extract-utilities/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.497986 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/extract-content/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.517631 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfchs_4da906ef-6bfe-4595-b492-fc192b73118e/registry-server/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.560831 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/extract-utilities/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.770559 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/extract-utilities/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.780062 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/extract-content/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.805015 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/extract-content/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.994335 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/extract-content/0.log" Jan 30 00:26:35 crc kubenswrapper[5117]: I0130 00:26:35.998540 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/extract-utilities/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.015434 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-rzwxb_59b8040b-1d85-49e5-8969-3d1fe83b360e/marketplace-operator/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.191977 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/extract-utilities/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.241475 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6d7bs_841f2982-7c20-4202-a7ca-633883c148b2/registry-server/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.395469 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/extract-content/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.398165 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/extract-utilities/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.433523 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/extract-content/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.583567 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/extract-utilities/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.603140 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/extract-content/0.log" Jan 30 00:26:36 crc kubenswrapper[5117]: I0130 00:26:36.694541 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7m42x_00c7c764-9b8c-4146-a659-38621c5e3c35/registry-server/0.log" Jan 30 00:26:39 crc kubenswrapper[5117]: E0130 00:26:39.045914 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: 
Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:26:39 crc kubenswrapper[5117]: I0130 00:26:39.865412 5117 scope.go:117] "RemoveContainer" containerID="51cc98933531c2f052fb0b9df8b8f898d3c8d27da7a0ecb0173330297645ae0a" Jan 30 00:26:49 crc kubenswrapper[5117]: I0130 00:26:49.114667 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6686d56dd5-mlcf4_2dad4104-a9d8-45e9-9eae-39a841d6bd14/prometheus-operator-admission-webhook/0.log" Jan 30 00:26:49 crc kubenswrapper[5117]: I0130 00:26:49.144457 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-nsdnx_7858922f-a122-4c3c-8e82-2941f771c502/prometheus-operator/0.log" Jan 30 00:26:49 crc kubenswrapper[5117]: I0130 00:26:49.183223 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6686d56dd5-vt85n_16e72675-b6b0-409c-a161-7d1add8eba30/prometheus-operator-admission-webhook/0.log" Jan 30 00:26:49 crc kubenswrapper[5117]: I0130 00:26:49.278052 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-g88kv_2d031867-84f8-4c5b-824a-3be88a288652/operator/0.log" Jan 30 00:26:49 crc kubenswrapper[5117]: I0130 00:26:49.304029 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-xrds8_e3aa9511-b055-404b-b641-6b26327a7ac4/perses-operator/0.log" Jan 30 00:26:52 crc kubenswrapper[5117]: I0130 00:26:52.038468 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:26:52 crc kubenswrapper[5117]: E0130 00:26:52.038682 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.555832 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.556576 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.556999 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.558085 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54d3a6365c99493f08f59c805da853cdb6dce1209ccd8d5d1aa4a59d4a29f37d"} pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.558203 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" containerID="cri-o://54d3a6365c99493f08f59c805da853cdb6dce1209ccd8d5d1aa4a59d4a29f37d" gracePeriod=600 Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.770438 5117 generic.go:358] "Generic (PLEG): container finished" podID="3965caad-c581-45b3-88e0-99b4039659c5" containerID="54d3a6365c99493f08f59c805da853cdb6dce1209ccd8d5d1aa4a59d4a29f37d" exitCode=0 Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.770557 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerDied","Data":"54d3a6365c99493f08f59c805da853cdb6dce1209ccd8d5d1aa4a59d4a29f37d"} Jan 30 00:27:04 crc kubenswrapper[5117]: I0130 00:27:04.770636 5117 scope.go:117] "RemoveContainer" containerID="073defdd7077d53303dbf34291dabf3d999fa2598157f63385c19d2858c64243" Jan 30 00:27:05 crc kubenswrapper[5117]: I0130 00:27:05.781448 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"60a9c372470eb41b75bcddd022584d6a399535df97675e40a53392e99465c497"} Jan 30 00:27:07 crc kubenswrapper[5117]: E0130 00:27:07.042743 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup 
Jan 30 00:27:07 crc kubenswrapper[5117]: E0130 00:27:07.042743 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:27:18 crc kubenswrapper[5117]: E0130 00:27:18.042354 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:27:30 crc kubenswrapper[5117]: E0130 00:27:30.273432 5117 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb"
Jan 30 00:27:30 crc kubenswrapper[5117]: E0130 00:27:30.274113 5117 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nlg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h_openshift-marketplace(e0791d08-fb28-4fed-9fc1-f4a1c7d8c077): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError"
Jan 30 00:27:30 crc kubenswrapper[5117]: E0130 00:27:30.276078 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"
Jan 30 00:27:34 crc kubenswrapper[5117]: I0130 00:27:34.003481 5117 generic.go:358] "Generic (PLEG): container finished" podID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerID="3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8" exitCode=0
Jan 30 00:27:34 crc kubenswrapper[5117]: I0130 00:27:34.003585 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7wlw4/must-gather-jksxc" event={"ID":"7eb1ec6a-3800-4471-97ff-ea77433e7cd3","Type":"ContainerDied","Data":"3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8"}
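[Annotation] When the pull init container fails to start, kuberuntime_manager dumps the full Container spec into the log (the "Unhandled Error" entry above). The resource requests appear in resource.Quantity's internal (unscaled value, scale) form: cpu {{10 -3}} is 10 x 10^-3 = 0.010 CPU = 10m, and memory {{52428800 0}} is 52428800 x 10^0 bytes = 50 x 1024 x 1024 = 50Mi. A small sketch confirming the round trip with the apimachinery resource package (standalone program, requires the k8s.io/apimachinery module; not kubelet code):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // The same requests the kubelet dumped for the "pull" init container.
        cpu := resource.MustParse("10m")  // logged as {{10 -3} {} 10m DecimalSI}
        mem := resource.MustParse("50Mi") // logged as {{52428800 0} {} 50Mi BinarySI}
        fmt.Println(cpu.MilliValue(), "millicores") // 10
        fmt.Println(mem.Value(), "bytes")           // 52428800 = 50*1024*1024
    }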
scope.go:117] "RemoveContainer" containerID="3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8" Jan 30 00:27:34 crc kubenswrapper[5117]: I0130 00:27:34.916598 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7wlw4_must-gather-jksxc_7eb1ec6a-3800-4471-97ff-ea77433e7cd3/gather/0.log" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.086445 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7wlw4/must-gather-jksxc"] Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.087374 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-7wlw4/must-gather-jksxc" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="copy" containerID="cri-o://f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f" gracePeriod=2 Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.090956 5117 status_manager.go:895] "Failed to get status for pod" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" pod="openshift-must-gather-7wlw4/must-gather-jksxc" err="pods \"must-gather-jksxc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7wlw4\": no relationship found between node 'crc' and this object" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.092786 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7wlw4/must-gather-jksxc"] Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.465809 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7wlw4_must-gather-jksxc_7eb1ec6a-3800-4471-97ff-ea77433e7cd3/copy/0.log" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.466634 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7wlw4/must-gather-jksxc" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.468406 5117 status_manager.go:895] "Failed to get status for pod" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" pod="openshift-must-gather-7wlw4/must-gather-jksxc" err="pods \"must-gather-jksxc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7wlw4\": no relationship found between node 'crc' and this object" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.517707 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-must-gather-output\") pod \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.517845 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5rnb\" (UniqueName: \"kubernetes.io/projected/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-kube-api-access-p5rnb\") pod \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\" (UID: \"7eb1ec6a-3800-4471-97ff-ea77433e7cd3\") " Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.525582 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-kube-api-access-p5rnb" (OuterVolumeSpecName: "kube-api-access-p5rnb") pod "7eb1ec6a-3800-4471-97ff-ea77433e7cd3" (UID: "7eb1ec6a-3800-4471-97ff-ea77433e7cd3"). InnerVolumeSpecName "kube-api-access-p5rnb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.576455 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7eb1ec6a-3800-4471-97ff-ea77433e7cd3" (UID: "7eb1ec6a-3800-4471-97ff-ea77433e7cd3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.619552 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5rnb\" (UniqueName: \"kubernetes.io/projected/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-kube-api-access-p5rnb\") on node \"crc\" DevicePath \"\"" Jan 30 00:27:41 crc kubenswrapper[5117]: I0130 00:27:41.619582 5117 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7eb1ec6a-3800-4471-97ff-ea77433e7cd3-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.063488 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7wlw4_must-gather-jksxc_7eb1ec6a-3800-4471-97ff-ea77433e7cd3/copy/0.log" Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.063778 5117 generic.go:358] "Generic (PLEG): container finished" podID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerID="f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f" exitCode=143 Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.063842 5117 scope.go:117] "RemoveContainer" containerID="f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f" Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.063936 5117 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.063936 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7wlw4/must-gather-jksxc"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.067711 5117 status_manager.go:895] "Failed to get status for pod" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" pod="openshift-must-gather-7wlw4/must-gather-jksxc" err="pods \"must-gather-jksxc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7wlw4\": no relationship found between node 'crc' and this object"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.085308 5117 status_manager.go:895] "Failed to get status for pod" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" pod="openshift-must-gather-7wlw4/must-gather-jksxc" err="pods \"must-gather-jksxc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-7wlw4\": no relationship found between node 'crc' and this object"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.087015 5117 scope.go:117] "RemoveContainer" containerID="3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.171535 5117 scope.go:117] "RemoveContainer" containerID="f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f"
Jan 30 00:27:42 crc kubenswrapper[5117]: E0130 00:27:42.172022 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f\": container with ID starting with f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f not found: ID does not exist" containerID="f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.172054 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f"} err="failed to get container status \"f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f\": rpc error: code = NotFound desc = could not find container \"f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f\": container with ID starting with f286f9ee0ead5b04802bf7ac9263ddcc5e994122ee4a80977c94271bd722299f not found: ID does not exist"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.172072 5117 scope.go:117] "RemoveContainer" containerID="3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8"
Jan 30 00:27:42 crc kubenswrapper[5117]: E0130 00:27:42.172401 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8\": container with ID starting with 3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8 not found: ID does not exist" containerID="3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8"
Jan 30 00:27:42 crc kubenswrapper[5117]: I0130 00:27:42.172415 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8"} err="failed to get container status \"3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8\": rpc error: code = NotFound desc = could not find container \"3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8\": container with ID starting with 
3a7de3494c59e4010c9e6cc39b0255d9cbeeee368c76d2fe2b20325cb4749ab8 not found: ID does not exist" Jan 30 00:27:43 crc kubenswrapper[5117]: I0130 00:27:43.049298 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" path="/var/lib/kubelet/pods/7eb1ec6a-3800-4471-97ff-ea77433e7cd3/volumes" Jan 30 00:27:45 crc kubenswrapper[5117]: E0130 00:27:45.041331 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:28:00 crc kubenswrapper[5117]: E0130 00:28:00.041943 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.145749 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495548-6xwwn"] Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147158 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147194 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147239 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="gather" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147251 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="gather" Jan 30 
00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147286 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="copy" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147297 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="copy" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147496 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="gather" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147528 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7eb1ec6a-3800-4471-97ff-ea77433e7cd3" containerName="copy" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.147552 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e914f4b4-5b56-47f8-9b7b-2a9bf1ca550f" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.157221 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-6xwwn"] Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.157370 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.159905 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-f9hbv\"" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.160202 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.160813 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.224646 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqxpw\" (UniqueName: \"kubernetes.io/projected/d6d60661-c36b-4685-93a3-a6d5782d6b7a-kube-api-access-vqxpw\") pod \"auto-csr-approver-29495548-6xwwn\" (UID: \"d6d60661-c36b-4685-93a3-a6d5782d6b7a\") " pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.325457 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vqxpw\" (UniqueName: \"kubernetes.io/projected/d6d60661-c36b-4685-93a3-a6d5782d6b7a-kube-api-access-vqxpw\") pod \"auto-csr-approver-29495548-6xwwn\" (UID: \"d6d60661-c36b-4685-93a3-a6d5782d6b7a\") " pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.360185 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqxpw\" (UniqueName: \"kubernetes.io/projected/d6d60661-c36b-4685-93a3-a6d5782d6b7a-kube-api-access-vqxpw\") pod \"auto-csr-approver-29495548-6xwwn\" (UID: \"d6d60661-c36b-4685-93a3-a6d5782d6b7a\") " pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.486767 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:00 crc kubenswrapper[5117]: I0130 00:28:00.714516 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-6xwwn"] Jan 30 00:28:01 crc kubenswrapper[5117]: I0130 00:28:01.218265 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" event={"ID":"d6d60661-c36b-4685-93a3-a6d5782d6b7a","Type":"ContainerStarted","Data":"92006b4bf542f79ec4a9821c9efb9c5936ed08d660ed9d43c3ca59568eb824c5"} Jan 30 00:28:02 crc kubenswrapper[5117]: I0130 00:28:02.228456 5117 generic.go:358] "Generic (PLEG): container finished" podID="d6d60661-c36b-4685-93a3-a6d5782d6b7a" containerID="00df370c4538d4e428c9c3015c389b6743301582674ac5a96fe258fe3ec272f6" exitCode=0 Jan 30 00:28:02 crc kubenswrapper[5117]: I0130 00:28:02.228528 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" event={"ID":"d6d60661-c36b-4685-93a3-a6d5782d6b7a","Type":"ContainerDied","Data":"00df370c4538d4e428c9c3015c389b6743301582674ac5a96fe258fe3ec272f6"} Jan 30 00:28:03 crc kubenswrapper[5117]: I0130 00:28:03.527436 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:03 crc kubenswrapper[5117]: I0130 00:28:03.672340 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqxpw\" (UniqueName: \"kubernetes.io/projected/d6d60661-c36b-4685-93a3-a6d5782d6b7a-kube-api-access-vqxpw\") pod \"d6d60661-c36b-4685-93a3-a6d5782d6b7a\" (UID: \"d6d60661-c36b-4685-93a3-a6d5782d6b7a\") " Jan 30 00:28:03 crc kubenswrapper[5117]: I0130 00:28:03.682551 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d60661-c36b-4685-93a3-a6d5782d6b7a-kube-api-access-vqxpw" (OuterVolumeSpecName: "kube-api-access-vqxpw") pod "d6d60661-c36b-4685-93a3-a6d5782d6b7a" (UID: "d6d60661-c36b-4685-93a3-a6d5782d6b7a"). InnerVolumeSpecName "kube-api-access-vqxpw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:03 crc kubenswrapper[5117]: I0130 00:28:03.773878 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vqxpw\" (UniqueName: \"kubernetes.io/projected/d6d60661-c36b-4685-93a3-a6d5782d6b7a-kube-api-access-vqxpw\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:04 crc kubenswrapper[5117]: I0130 00:28:04.247863 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" event={"ID":"d6d60661-c36b-4685-93a3-a6d5782d6b7a","Type":"ContainerDied","Data":"92006b4bf542f79ec4a9821c9efb9c5936ed08d660ed9d43c3ca59568eb824c5"} Jan 30 00:28:04 crc kubenswrapper[5117]: I0130 00:28:04.248138 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92006b4bf542f79ec4a9821c9efb9c5936ed08d660ed9d43c3ca59568eb824c5" Jan 30 00:28:04 crc kubenswrapper[5117]: I0130 00:28:04.247996 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-6xwwn" Jan 30 00:28:04 crc kubenswrapper[5117]: I0130 00:28:04.610834 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-wzq4f"] Jan 30 00:28:04 crc kubenswrapper[5117]: I0130 00:28:04.616928 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-wzq4f"] Jan 30 00:28:05 crc kubenswrapper[5117]: I0130 00:28:05.050321 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64980233-03c4-482d-bf2d-1bb9e9bc6614" path="/var/lib/kubelet/pods/64980233-03c4-482d-bf2d-1bb9e9bc6614/volumes" Jan 30 00:28:14 crc kubenswrapper[5117]: E0130 00:28:14.041475 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.302586 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68j9n"] Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.304362 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6d60661-c36b-4685-93a3-a6d5782d6b7a" containerName="oc" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.304399 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d60661-c36b-4685-93a3-a6d5782d6b7a" containerName="oc" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.304926 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d6d60661-c36b-4685-93a3-a6d5782d6b7a" containerName="oc" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.316045 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.326643 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68j9n"] Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.423433 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-utilities\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.423646 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z74k\" (UniqueName: \"kubernetes.io/projected/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-kube-api-access-9z74k\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.423740 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-catalog-content\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.524566 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-utilities\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.524864 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9z74k\" (UniqueName: \"kubernetes.io/projected/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-kube-api-access-9z74k\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.524907 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-catalog-content\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.525672 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-catalog-content\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.526289 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-utilities\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.554884 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9z74k\" (UniqueName: \"kubernetes.io/projected/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-kube-api-access-9z74k\") pod \"redhat-operators-68j9n\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.645277 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:19 crc kubenswrapper[5117]: I0130 00:28:19.904329 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68j9n"] Jan 30 00:28:20 crc kubenswrapper[5117]: I0130 00:28:20.366134 5117 generic.go:358] "Generic (PLEG): container finished" podID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerID="5a3cdea2fc203e5842b46cc7ab6435b80eefc37e206966200fc9c00211b38da5" exitCode=0 Jan 30 00:28:20 crc kubenswrapper[5117]: I0130 00:28:20.366416 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerDied","Data":"5a3cdea2fc203e5842b46cc7ab6435b80eefc37e206966200fc9c00211b38da5"} Jan 30 00:28:20 crc kubenswrapper[5117]: I0130 00:28:20.366455 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerStarted","Data":"4b5c7a992d3326313bf28d938b172efc1140140f6eacc91498b6e2166f5fd912"} Jan 30 00:28:21 crc kubenswrapper[5117]: I0130 00:28:21.374651 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerStarted","Data":"5917277b3f1859a3aa4e4ea4ba9a63ffb13722836afadb749781121c2fb88eaa"} Jan 30 00:28:22 crc kubenswrapper[5117]: I0130 00:28:22.382710 5117 generic.go:358] "Generic (PLEG): container finished" podID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerID="5917277b3f1859a3aa4e4ea4ba9a63ffb13722836afadb749781121c2fb88eaa" exitCode=0 Jan 30 00:28:22 crc kubenswrapper[5117]: I0130 00:28:22.382811 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerDied","Data":"5917277b3f1859a3aa4e4ea4ba9a63ffb13722836afadb749781121c2fb88eaa"} Jan 30 00:28:23 crc kubenswrapper[5117]: I0130 00:28:23.393018 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerStarted","Data":"d2e813a068a5e4a925cedefa85d37e19789ec2536046aa03ccec5162a167cad2"} Jan 30 00:28:23 crc kubenswrapper[5117]: I0130 00:28:23.429094 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68j9n" podStartSLOduration=3.84314854 podStartE2EDuration="4.429067189s" podCreationTimestamp="2026-01-30 00:28:19 +0000 UTC" firstStartedPulling="2026-01-30 00:28:20.366869161 +0000 UTC m=+1063.478405051" lastFinishedPulling="2026-01-30 00:28:20.95278778 +0000 UTC m=+1064.064323700" observedRunningTime="2026-01-30 00:28:23.420342785 +0000 UTC m=+1066.531878725" watchObservedRunningTime="2026-01-30 00:28:23.429067189 +0000 UTC m=+1066.540603119" Jan 30 00:28:25 crc kubenswrapper[5117]: E0130 00:28:25.038897 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:28:29 crc kubenswrapper[5117]: I0130 00:28:29.646812 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:29 crc kubenswrapper[5117]: I0130 00:28:29.647396 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:29 crc kubenswrapper[5117]: I0130 00:28:29.707481 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:30 crc kubenswrapper[5117]: I0130 00:28:30.482518 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:30 crc kubenswrapper[5117]: I0130 00:28:30.538474 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68j9n"] Jan 30 00:28:32 crc kubenswrapper[5117]: I0130 00:28:32.450628 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68j9n" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="registry-server" containerID="cri-o://d2e813a068a5e4a925cedefa85d37e19789ec2536046aa03ccec5162a167cad2" gracePeriod=2 Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.460553 5117 generic.go:358] "Generic (PLEG): container finished" podID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerID="d2e813a068a5e4a925cedefa85d37e19789ec2536046aa03ccec5162a167cad2" exitCode=0 Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.460799 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerDied","Data":"d2e813a068a5e4a925cedefa85d37e19789ec2536046aa03ccec5162a167cad2"} Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.577713 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.724978 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-catalog-content\") pod \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.725027 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z74k\" (UniqueName: \"kubernetes.io/projected/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-kube-api-access-9z74k\") pod \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.725105 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-utilities\") pod \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\" (UID: \"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5\") " Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.726208 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-utilities" (OuterVolumeSpecName: "utilities") pod "500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" (UID: "500d9f3c-db8c-49fa-a4fd-c0fc28d884c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.730368 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-kube-api-access-9z74k" (OuterVolumeSpecName: "kube-api-access-9z74k") pod "500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" (UID: "500d9f3c-db8c-49fa-a4fd-c0fc28d884c5"). InnerVolumeSpecName "kube-api-access-9z74k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.826811 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z74k\" (UniqueName: \"kubernetes.io/projected/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-kube-api-access-9z74k\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.826851 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.830040 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" (UID: "500d9f3c-db8c-49fa-a4fd-c0fc28d884c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:28:33 crc kubenswrapper[5117]: I0130 00:28:33.928426 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.469957 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68j9n" event={"ID":"500d9f3c-db8c-49fa-a4fd-c0fc28d884c5","Type":"ContainerDied","Data":"4b5c7a992d3326313bf28d938b172efc1140140f6eacc91498b6e2166f5fd912"} Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.470013 5117 scope.go:117] "RemoveContainer" containerID="d2e813a068a5e4a925cedefa85d37e19789ec2536046aa03ccec5162a167cad2" Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.470057 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68j9n" Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.493310 5117 scope.go:117] "RemoveContainer" containerID="5917277b3f1859a3aa4e4ea4ba9a63ffb13722836afadb749781121c2fb88eaa" Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.506376 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68j9n"] Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.511203 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68j9n"] Jan 30 00:28:34 crc kubenswrapper[5117]: I0130 00:28:34.533714 5117 scope.go:117] "RemoveContainer" containerID="5a3cdea2fc203e5842b46cc7ab6435b80eefc37e206966200fc9c00211b38da5" Jan 30 00:28:35 crc kubenswrapper[5117]: I0130 00:28:35.045403 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" path="/var/lib/kubelet/pods/500d9f3c-db8c-49fa-a4fd-c0fc28d884c5/volumes" Jan 30 00:28:39 crc kubenswrapper[5117]: E0130 00:28:39.041195 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:28:39 crc kubenswrapper[5117]: I0130 00:28:39.988110 5117 scope.go:117] "RemoveContainer" containerID="07cc440485a45988bcf62dee2e6ddfcf006421300ab04f7f12c2b358b746fcac" Jan 30 00:28:54 crc kubenswrapper[5117]: E0130 00:28:54.040194 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:29:04 crc kubenswrapper[5117]: I0130 00:29:04.554957 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:04 crc kubenswrapper[5117]: I0130 00:29:04.555420 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:06 crc kubenswrapper[5117]: E0130 00:29:06.040562 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:29:19 crc kubenswrapper[5117]: E0130 00:29:19.043721 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build 
image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:29:30 crc kubenswrapper[5117]: E0130 00:29:30.039716 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:29:34 crc kubenswrapper[5117]: I0130 00:29:34.555170 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:34 crc kubenswrapper[5117]: I0130 00:29:34.555254 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:44 crc kubenswrapper[5117]: E0130 00:29:44.040314 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:29:55 crc kubenswrapper[5117]: E0130 00:29:55.039324 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.145571 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"] Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147088 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="registry-server" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147129 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="registry-server" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147189 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="extract-content" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147201 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="extract-content" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147231 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="extract-utilities" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147243 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="extract-utilities" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.147464 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="500d9f3c-db8c-49fa-a4fd-c0fc28d884c5" containerName="registry-server" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.161993 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495550-hnvfc"] Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.163144 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.168411 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.169189 5117 util.go:30] "No sandbox for pod can be found. 
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.170015 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.172390 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-f9hbv\""
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.172911 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.172714 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.178131 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-hnvfc"]
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.192097 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"]
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.262873 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsv5\" (UniqueName: \"kubernetes.io/projected/36323d2b-515a-4c55-8bc5-6946fe36d44e-kube-api-access-gtsv5\") pod \"auto-csr-approver-29495550-hnvfc\" (UID: \"36323d2b-515a-4c55-8bc5-6946fe36d44e\") " pod="openshift-infra/auto-csr-approver-29495550-hnvfc"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.263169 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bfc748d-070f-40ea-9b88-188a69ffc691-secret-volume\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.263348 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfc748d-070f-40ea-9b88-188a69ffc691-config-volume\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.263385 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4hf6\" (UniqueName: \"kubernetes.io/projected/5bfc748d-070f-40ea-9b88-188a69ffc691-kube-api-access-b4hf6\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.365002 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bfc748d-070f-40ea-9b88-188a69ffc691-secret-volume\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.365082 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfc748d-070f-40ea-9b88-188a69ffc691-config-volume\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.365103 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b4hf6\" (UniqueName: \"kubernetes.io/projected/5bfc748d-070f-40ea-9b88-188a69ffc691-kube-api-access-b4hf6\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.365131 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtsv5\" (UniqueName: \"kubernetes.io/projected/36323d2b-515a-4c55-8bc5-6946fe36d44e-kube-api-access-gtsv5\") pod \"auto-csr-approver-29495550-hnvfc\" (UID: \"36323d2b-515a-4c55-8bc5-6946fe36d44e\") " pod="openshift-infra/auto-csr-approver-29495550-hnvfc"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.366184 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfc748d-070f-40ea-9b88-188a69ffc691-config-volume\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.386662 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bfc748d-070f-40ea-9b88-188a69ffc691-secret-volume\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.389628 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtsv5\" (UniqueName: \"kubernetes.io/projected/36323d2b-515a-4c55-8bc5-6946fe36d44e-kube-api-access-gtsv5\") pod \"auto-csr-approver-29495550-hnvfc\" (UID: \"36323d2b-515a-4c55-8bc5-6946fe36d44e\") " pod="openshift-infra/auto-csr-approver-29495550-hnvfc"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.393080 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4hf6\" (UniqueName: \"kubernetes.io/projected/5bfc748d-070f-40ea-9b88-188a69ffc691-kube-api-access-b4hf6\") pod \"collect-profiles-29495550-mzlck\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.491275 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.499907 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-hnvfc"
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.732423 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"]
Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.766567 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-hnvfc"]
Jan 30 00:30:01 crc kubenswrapper[5117]: I0130 00:30:01.176984 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-hnvfc" event={"ID":"36323d2b-515a-4c55-8bc5-6946fe36d44e","Type":"ContainerStarted","Data":"0e85c77c7d77efbcc3bb692332ab6d826ffd0ad3bffc0f2e3305f7bfa37bd705"}
Jan 30 00:30:01 crc kubenswrapper[5117]: I0130 00:30:01.179264 5117 generic.go:358] "Generic (PLEG): container finished" podID="5bfc748d-070f-40ea-9b88-188a69ffc691" containerID="15903816a47a3a7e31a37f29fa187a229f320326c776e0bdd2a013866a652074" exitCode=0
Jan 30 00:30:01 crc kubenswrapper[5117]: I0130 00:30:01.179348 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck" event={"ID":"5bfc748d-070f-40ea-9b88-188a69ffc691","Type":"ContainerDied","Data":"15903816a47a3a7e31a37f29fa187a229f320326c776e0bdd2a013866a652074"}
Jan 30 00:30:01 crc kubenswrapper[5117]: I0130 00:30:01.179374 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck" event={"ID":"5bfc748d-070f-40ea-9b88-188a69ffc691","Type":"ContainerStarted","Data":"3ea225f453bd4b1503c1f695cb3a3b6eaeaf282edf2b0c29c6ed96b9023f23d6"}
Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.440158 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"
Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.596246 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bfc748d-070f-40ea-9b88-188a69ffc691-secret-volume\") pod \"5bfc748d-070f-40ea-9b88-188a69ffc691\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") "
Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.596300 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfc748d-070f-40ea-9b88-188a69ffc691-config-volume\") pod \"5bfc748d-070f-40ea-9b88-188a69ffc691\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") "
Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.596322 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4hf6\" (UniqueName: \"kubernetes.io/projected/5bfc748d-070f-40ea-9b88-188a69ffc691-kube-api-access-b4hf6\") pod \"5bfc748d-070f-40ea-9b88-188a69ffc691\" (UID: \"5bfc748d-070f-40ea-9b88-188a69ffc691\") "
Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.597371 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfc748d-070f-40ea-9b88-188a69ffc691-config-volume" (OuterVolumeSpecName: "config-volume") pod "5bfc748d-070f-40ea-9b88-188a69ffc691" (UID: "5bfc748d-070f-40ea-9b88-188a69ffc691"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
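[editor's note] Each record in this file is a journald line wrapping a klog header: severity plus month/day (I0130), wall-clock time, the kubelet PID (5117), and the emitting source file and line. When filtering these logs it helps to split that header out; the sketch below is an ad-hoc helper written for illustration, not part of the CI tooling that produced this archive:

```go
// klog_parse.go: split one journald-wrapped kubelet entry into the klog
// header fields (severity, date, time, PID, source file:line) and the message.
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(
	`([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w./]+:\d+)\] (.*)`)

func main() {
	line := `Jan 30 00:30:00 crc kubenswrapper[5117]: I0130 00:30:00.732423 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck"]`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no klog header found")
		return
	}
	fmt.Printf("severity=%s date=%s-%s time=%s pid=%s src=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
	fmt.Println("message:", m[7])
}
```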
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.602185 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfc748d-070f-40ea-9b88-188a69ffc691-kube-api-access-b4hf6" (OuterVolumeSpecName: "kube-api-access-b4hf6") pod "5bfc748d-070f-40ea-9b88-188a69ffc691" (UID: "5bfc748d-070f-40ea-9b88-188a69ffc691"). InnerVolumeSpecName "kube-api-access-b4hf6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.602561 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfc748d-070f-40ea-9b88-188a69ffc691-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5bfc748d-070f-40ea-9b88-188a69ffc691" (UID: "5bfc748d-070f-40ea-9b88-188a69ffc691"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.697651 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bfc748d-070f-40ea-9b88-188a69ffc691-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.697680 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfc748d-070f-40ea-9b88-188a69ffc691-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:02 crc kubenswrapper[5117]: I0130 00:30:02.697710 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4hf6\" (UniqueName: \"kubernetes.io/projected/5bfc748d-070f-40ea-9b88-188a69ffc691-kube-api-access-b4hf6\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5117]: I0130 00:30:03.193333 5117 generic.go:358] "Generic (PLEG): container finished" podID="36323d2b-515a-4c55-8bc5-6946fe36d44e" containerID="94462ae016ea7e9fd97b12bb28be2498ebb6eba99b0058792863ae032d22c0bb" exitCode=0 Jan 30 00:30:03 crc kubenswrapper[5117]: I0130 00:30:03.193866 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-hnvfc" event={"ID":"36323d2b-515a-4c55-8bc5-6946fe36d44e","Type":"ContainerDied","Data":"94462ae016ea7e9fd97b12bb28be2498ebb6eba99b0058792863ae032d22c0bb"} Jan 30 00:30:03 crc kubenswrapper[5117]: I0130 00:30:03.196072 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck" event={"ID":"5bfc748d-070f-40ea-9b88-188a69ffc691","Type":"ContainerDied","Data":"3ea225f453bd4b1503c1f695cb3a3b6eaeaf282edf2b0c29c6ed96b9023f23d6"} Jan 30 00:30:03 crc kubenswrapper[5117]: I0130 00:30:03.196169 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea225f453bd4b1503c1f695cb3a3b6eaeaf282edf2b0c29c6ed96b9023f23d6" Jan 30 00:30:03 crc kubenswrapper[5117]: I0130 00:30:03.196232 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-mzlck" Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.517230 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-hnvfc" Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.556270 5117 patch_prober.go:28] interesting pod/machine-config-daemon-z8qm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.556688 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.557081 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.558280 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60a9c372470eb41b75bcddd022584d6a399535df97675e40a53392e99465c497"} pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.558560 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" podUID="3965caad-c581-45b3-88e0-99b4039659c5" containerName="machine-config-daemon" containerID="cri-o://60a9c372470eb41b75bcddd022584d6a399535df97675e40a53392e99465c497" gracePeriod=600 Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.625816 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtsv5\" (UniqueName: \"kubernetes.io/projected/36323d2b-515a-4c55-8bc5-6946fe36d44e-kube-api-access-gtsv5\") pod \"36323d2b-515a-4c55-8bc5-6946fe36d44e\" (UID: \"36323d2b-515a-4c55-8bc5-6946fe36d44e\") " Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.631970 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36323d2b-515a-4c55-8bc5-6946fe36d44e-kube-api-access-gtsv5" (OuterVolumeSpecName: "kube-api-access-gtsv5") pod "36323d2b-515a-4c55-8bc5-6946fe36d44e" (UID: "36323d2b-515a-4c55-8bc5-6946fe36d44e"). InnerVolumeSpecName "kube-api-access-gtsv5". 
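[editor's note] The entries above show the full liveness cycle for machine-config-daemon-z8qm4: repeated connection-refused failures on http://127.0.0.1:8798/health, then "SyncLoop (probe)" marking the container unhealthy, then a kill with gracePeriod=600; the ContainerDied/ContainerStarted pair just below completes the restart. The sketch below mimics the kubelet's HTTP probe semantics (a 2xx/3xx status is success, anything else or a transport error is a failure); it is an illustration, not kubelet source:

```go
// probe_check.go: run the same HTTP GET liveness check the kubelet performs
// against machine-config-daemon's health endpoint.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("probe failed:", err)
	} else {
		fmt.Println("probe ok")
	}
}
```

Once such failures cross the probe's failureThreshold, the kubelet kills and restarts the container, which is exactly the sequence logged here.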
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:04 crc kubenswrapper[5117]: I0130 00:30:04.728235 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtsv5\" (UniqueName: \"kubernetes.io/projected/36323d2b-515a-4c55-8bc5-6946fe36d44e-kube-api-access-gtsv5\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.209935 5117 generic.go:358] "Generic (PLEG): container finished" podID="3965caad-c581-45b3-88e0-99b4039659c5" containerID="60a9c372470eb41b75bcddd022584d6a399535df97675e40a53392e99465c497" exitCode=0 Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.209997 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerDied","Data":"60a9c372470eb41b75bcddd022584d6a399535df97675e40a53392e99465c497"} Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.210029 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z8qm4" event={"ID":"3965caad-c581-45b3-88e0-99b4039659c5","Type":"ContainerStarted","Data":"52e414652dc52f72fb923db383576935eef611cc63671623c87a642710679880"} Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.210045 5117 scope.go:117] "RemoveContainer" containerID="54d3a6365c99493f08f59c805da853cdb6dce1209ccd8d5d1aa4a59d4a29f37d" Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.213367 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-hnvfc" event={"ID":"36323d2b-515a-4c55-8bc5-6946fe36d44e","Type":"ContainerDied","Data":"0e85c77c7d77efbcc3bb692332ab6d826ffd0ad3bffc0f2e3305f7bfa37bd705"} Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.213385 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-hnvfc" Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.213391 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e85c77c7d77efbcc3bb692332ab6d826ffd0ad3bffc0f2e3305f7bfa37bd705" Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.569557 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kjdfn"] Jan 30 00:30:05 crc kubenswrapper[5117]: I0130 00:30:05.573288 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kjdfn"] Jan 30 00:30:07 crc kubenswrapper[5117]: E0130 00:30:07.039352 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:30:07 crc kubenswrapper[5117]: I0130 00:30:07.043339 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b" path="/var/lib/kubelet/pods/0f1d026f-8ed9-4f1d-be42-e27ea53a5f2b/volumes" Jan 30 00:30:18 crc kubenswrapper[5117]: E0130 00:30:18.049371 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077" Jan 30 00:30:33 crc kubenswrapper[5117]: E0130 00:30:33.051817 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
Jan 30 00:30:33 crc kubenswrapper[5117]: E0130 00:30:33.051817 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ehl42h" podUID="e0791d08-fb28-4fed-9fc1-f4a1c7d8c077"